Small n correlations cannot be trusted

This post illustrates two important effects of sample size on the estimation of correlation coefficients: lower sample sizes are associated with increased variability and lower probability of replication. This is not specific to correlations, but here we’re going to have a detailed look at what it means when using the popular Pearson’s correlation (similar results are obtained using Spearman’s correlation, and the same problems arise with regression). The R code is available on github.


UPDATE: 2018-06-02

In the original post, I mentioned non-linearities in some of the figures. Jan Vanhove replied on Twitter that he was not getting any, and suggested a different code snippet. I’ve updated the simulations using his code, and now the non-linearities are gone! So thanks Jan!

Johannes Algermissen mentioned on Twitter that his recent paper covered similar issues. Have a look! He also reminded me about this recent paper that makes points very similar to those in this blog.

Gjalt-Jorn Peters mentioned on Twitter that “you can also use the Pearson distribution in package suppdists. Also see pwr.confintR to compute the required sample size for a given desired accuracy in parameter estimation (AIPE), which can also come in handy when planning studies”.

Wolfgang Viechtbauer‏ mentioned on Twitter “that one can just compute the density of r directly (no need to simulate). For example: link. Then everything is nice and smooth”.

UPDATE: 2018-06-30

Frank Harrell wrote on Twitter: “I’ll also push the use of precision of correlation coefficient estimates in justifying sample sizes. Need n > 300 to estimate r. BBR Chapter 8”


Let’s start with an example, shown in the figure below. Nice scatterplot isn’t it! Sample size is 30, and r is 0.703. It seems we have discovered a relatively strong association between variables 1 and 2: let’s submit to Nature or PPNAS! And pollute the literature with another effect that won’t replicate!

figure_random_correlation

Yep, the data in the scatterplot are due to chance. They were sampled from a population with zero correlation. I suspect a lot of published correlations might well fall into that category. Nothing new here, false positives and inflated effect sizes are a natural outcome of small n experiments, and the problem gets worse with questionable research practices and incentives to publish positive new results. 

To understand the problem with estimation from small n experiments, we can perform a simulation in which we draw samples of different sizes from a bivariate normal population with a known Pearson’s correlation (rho) of zero. The sampling distributions of the estimates of rho for different sample sizes look like this:

figure_sampling_distributions
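If you want to play with this at home, here is a minimal R sketch of that kind of simulation. It is not the code from the github repository, and the number of iterations and the sample sizes are arbitrary choices for illustration.

```r
library(MASS) # for mvrnorm

set.seed(21)
nsim <- 5000                      # number of simulated experiments
nseq <- c(10, 20, 50, 100, 200)   # sample sizes
rho <- 0                          # population correlation
sigma <- matrix(c(1, rho, rho, 1), nrow = 2)

# sampling distribution of Pearson's r for each sample size
res <- sapply(nseq, function(n) {
  replicate(nsim, cor(mvrnorm(n, mu = c(0, 0), Sigma = sigma))[1, 2])
})
colnames(res) <- paste0("n=", nseq)

# the distributions get narrower with increasing n
apply(res, 2, quantile, probs = c(0.025, 0.975))
```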

 

Sampling distributions tell us about the behaviour of a statistic in the long run, if we did many experiments. Here, with increasing sample sizes, the sampling distributions are narrower, which means that in the long run, we get more precise estimates. However, a typical article reports only one correlation estimate, which could be completely off. So what sample size should we use to get a precise estimate? The answer depends on:

  • the shape of the univariate and bivariate distributions (if outliers are common, consider robust methods);

  • the expected effect size (the larger the effect, the fewer trials are needed – see below);

  • the precision we want to afford.

For the sampling distributions in the previous figure, we can ask this question for each sample size:

What is the proportion of correlation estimates that are within +/- a certain number of units from the true population correlation? For instance:

  • for 70% of estimates to be within +/- 0.1 of the true correlation value (between -0.1 and 0.1), we need at least 109 observations;

  • for 90% of estimates to be within +/- 0.2 of the true correlation value (between -0.2 and 0.2), we need at least 70 observations. 

These values are illustrated in the next figure using black lines and arrows. The figure shows the proportion of estimates near the true value, for different sample sizes, and for different levels of precision. The bottom line is that even if we’re willing to make imprecise measurements (up to 0.2 from the true value), we need a lot of observations to be precise often enough in the long run.

figure_precision
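These proportions can be computed directly from simulated sampling distributions. Here is a minimal sketch, again not the exact code used for the figure, which checks the two values mentioned above:

```r
library(MASS) # for mvrnorm

set.seed(21)
nsim <- 5000
sigma <- matrix(c(1, 0, 0, 1), nrow = 2)  # population correlation rho = 0

# proportion of estimates within +/- precision of the population value
prop_close <- function(n, precision) {
  r <- replicate(nsim, cor(mvrnorm(n, mu = c(0, 0), Sigma = sigma))[1, 2])
  mean(abs(r) <= precision)
}

prop_close(n = 109, precision = 0.1)  # should be close to 0.70
prop_close(n = 70,  precision = 0.2)  # should be close to 0.90
```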

 

The estimation uncertainty associated with small sample sizes leads to another problem: effects are not likely to replicate. A successful replication can be defined in several ways. Here I won’t consider the relatively trivial case of finding a statistically significant (p<0.05) effect going in the same direction in two experiments. Instead, let’s consider how close two estimates are. We can determine, given a certain level of precision, the probability of observing similar effects in two consecutive experiments. In other words, we can find the probability that two measurements differ by at most a certain amount. Not surprisingly, the results follow the same pattern as those observed in the previous figure: the probability of replication (y-axis) increases with sample size (x-axis) and with the uncertainty we’re willing to accept (see legend with colour-coded difference conditions).

 

figure_replication

In the figure above, the black lines indicate that for 80% of replications to be at most 0.2 apart, we need at least 83 observations.
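This replication probability can be estimated by simulating pairs of experiments. A rough sketch of the idea, with arbitrary simulation settings:

```r
library(MASS) # for mvrnorm

set.seed(21)
nsim <- 5000
sigma <- matrix(c(1, 0, 0, 1), nrow = 2)  # population correlation rho = 0

# probability that two independent experiments of size n give correlation
# estimates at most `maxdiff` apart
p_replicate <- function(n, maxdiff) {
  r1 <- replicate(nsim, cor(mvrnorm(n, c(0, 0), sigma))[1, 2])
  r2 <- replicate(nsim, cor(mvrnorm(n, c(0, 0), sigma))[1, 2])
  mean(abs(r1 - r2) <= maxdiff)
}

p_replicate(n = 83, maxdiff = 0.2)  # should be close to 0.80
```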

So far, we have considered samples from a population with zero correlation, such that large correlations were due to chance. What happens when there is an effect? Let’s find out for a fixed sample size of 30, as illustrated in the next figure.

figure_sampling_distributions_rho

 

As a sanity check, we can see that the modes of the sampling distributions progressively increase with increasing population correlations. More interestingly, the sampling distributions also get narrower with increasing effect sizes. As a consequence, the larger the true effect we’re trying to estimate, the more precise our estimates. Or put another way, for a given level of desired precision, we need fewer trials to estimate a true large effect. The next figure shows the proportion of estimates close to the true value, as a function of the population correlation, and for different levels of precision, given a sample size of 30 observations.

figure_precision_rho

 

Overall, in the long run, we can achieve more precise measurements more often if we’re studying true large effects. The exact values will depend on priors about expected effect sizes, shape of distributions and desired precision or achievable sample size. Let’s look in more detail at the sampling distributions for a generous rho = 0.4.

figure_sampling_distributions_rho04

 

The sampling distributions for n<50 appear to be negatively skewed, which means that in the long run, experiments might tend to give biased estimates of the population value; in particular, experiments with n=10 or n=20 are more likely than others to get the sign wrong (long left tail) and to overestimate the true value (distribution mode shifted to the right). From the same data, we can calculate the proportion of correlation estimates close to the true value, as a function of sample size and for different precision levels.

figure_precision_rho04

 

We get these approximate results:

  • for 70% of estimates to be within +/- 0.1 of the true correlation value (between 0.3 and 0.5), we need at least 78 observations;

  • for 90% of estimates to be within +/- 0.2 of the true correlation value (between 0.2 and 0.6), we need at least 50 observations. 

You could repeat this exercise using the R code to get estimates based on your own priors and the precision you want to afford.

Finally, we can look at the probability of observing similar effects in two consecutive experiments, for a given precision. In other words, what is the probability that two measurements differ by at most a certain amount? The next figure shows results for differences ranging from 0.05 (very precise) to 0.4 (very imprecise). The black arrow illustrates that for 80% of replications to be at most 0.2 apart, we need at least 59 observations.

figure_replication_rho04

 

We could do the same analyses presented in this post for power. However, I don’t really see the point of looking at power if the goal is to quantify an effect. The precision of our measurements and of our estimations should be a much stronger concern than the probability of flagging any effect as statistically significant (McShane et al. 2018).

There is a lot more to say about correlation estimation and I would recommend in particular these papers from Ed Vul and Tal Yarkoni, from the voodoo correlation era. More recently, Schönbrodt & Perugini (2013) looked at the effect of sample size on correlation estimation, with a focus on precision, similarly to this post. Finally, this more general paper (Forstmeier, Wagenmakers & Parker, 2016) about false positives is well worth reading.


Test-retest reliability assessment using graphical methods

UPDATE (2018-05-17): as explained in the now updated previous post, the shift function for pairwise differences, originally described as a great tool to assess test-retest reliability, is completely flawed. The approach using scatterplots remains valid. If you know of other graphical methods, please leave a comment.


Test-retest reliability is often summarised using a correlation coefficient, typically without illustrating the raw data. This is a very bad idea given that the same correlation coefficient can result from many different configurations of observations. Graphical representations are thus essential to assess test-retest reliability, as demonstrated for instance in the work of Bland & Altman.

The R code for this post is on github.

Example 1: made up data

Let’s look at a first example using made up data. Imagine that reaction times were measured from 100 participants in two sessions. The medians of the two distributions do not differ much, but the shapes do differ a lot, similarly to the example covered in the previous post.

figure_kde

The kernel density estimates above do not reveal the pairwise associations between observations. This is better done using a scatterplot. In this plot, strong test-retest reliability would show up as a tight cloud of points along the unity line (the black diagonal line).

figure_scatter

Here the observations do not fall on the unity line: instead the relationship leads to a much shallower slope than expected if the test-retest reliability was high. For fast responses in session 1, responses tended to be slower in session 2. Conversely, for slow responses in session 1, responses tended to be faster in session 2. This pattern would be expected if there was regression to the mean [wikipedia] [Barnett et al. 2005], that is, particularly fast or particularly slow responses in session 1 were due in part to chance, such that responses from the same individuals in session 2 were closer to the group mean. Here we know this is the case because the data are made up to have that pattern.
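For concreteness, here is a minimal sketch of one way to generate data with this pattern (not the exact code used for the figures): each session is the same underlying value plus independent measurement noise, so the between-session correlation is below 1 and extreme session 1 scores regress towards the mean in session 2.

```r
set.seed(1)
n <- 100
truth <- rnorm(n, mean = 500, sd = 80)   # each participant's "true" latency
session1 <- truth + rnorm(n, sd = 60)    # measurement noise in session 1
session2 <- truth + rnorm(n, sd = 60)    # independent noise in session 2

# extreme session 1 values are partly due to noise, so the corresponding
# session 2 values regress towards the group mean
plot(session1, session2, asp = 1)
abline(0, 1)                                    # unity line
lines(lowess(session1, session2), col = "red")  # shallower than the diagonal
```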

We can also use a shift function for dependent groups to investigate the relationship between sessions, as we did in the previous post.

figure_sf_dhd

The shift function reveals a characteristic difference in spread between the two distributions, a pattern that is also expected if there is regression to the mean. Essentially, the shift function shows how the distribution in session 2 needs to be modified to match the distribution in session 1: the lowest deciles need to be decreased and the highest deciles need to be increased, and these changes should be stronger as we move towards the tails of the distribution. For this example, these changes would be similar to an anti-clockwise rotation of the regression slope in the next figure, to align the cloud of observations with the black diagonal line.

figure_scatter_regline

To confirm these observations, we also perform a shift function for pairwise differences. 

 

This second type of shift function reveals a pattern very similar to the previous one. In the [previous post], I wrote that this “is re-assuring. But there might be situations where the two versions differ.” Well, here are two such situations…

Example 2: ERP onsets

Here we look at ERP onsets from an object detection task (Bieniek et al. 2016). In that study, 74 of our 120 participants were tested twice, to assess the test-retest reliability of different measurements, including onsets. The distributions of onsets across participants are positively skewed, with a few participants with particularly early or late onsets. The distributions for the two sessions appear quite similar.

figure_ERP_kde

With these data, we were particularly interested in the reliability of the left and right tails: if early onsets in session 1 were due to chance, we would expect session 2 estimates to be overall larger (shifted to the right); similarly, if late onsets in session 1 were due to chance, we would expect session 2 estimates to be overall smaller (shifted to the left). Plotting session 2 onsets as a function of session 1 onsets does not reveal a strong pattern of regression to the mean as we observed in example 1. 

figure_ERP_scatter1

Adding a loess regression line suggests there might actually be an overall clockwise rotation of the cloud of points relative to the black diagonal.

figure_ERP_scatter1_regline

The pattern is even more apparent if we plot the difference between sessions on the y axis. This is sometimes called a Bland & Altman plot (but here without the SD lines).

figure_ERP_scatter2_regline
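The onset data themselves are not reproduced here, so here is only a rough sketch of how such a plot can be built, using made-up, positively skewed onsets and the session 1 minus session 2 convention used later in this post:

```r
set.seed(2)
n <- 74
session1 <- 80 + rexp(n, rate = 1/40)   # made-up, positively skewed onsets (ms)
session2 <- 80 + rexp(n, rate = 1/40)

diff12 <- session1 - session2           # difference between sessions
plot(session1, diff12,
     xlab = "Session 1 onsets (ms)",
     ylab = "Session 1 - session 2 (ms)")
abline(h = 0)                                  # no-change reference line
lines(lowess(session1, diff12), col = "red")   # smooth trend, similar to a loess fit
```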

However, a shift function on the marginals is relatively flat.

figure_ERP_sf_dhd

Although there seems to be a linear trend, the uncertainty around the differences between deciles is large. In the original paper, we wrote this conclusion (sorry for the awful frequentist statement, I won’t do it again):

“across the 74 participants tested twice, no significant differences were found between any of the onset deciles (Fig. 6C). This last result is important because it demonstrates that test–retest reliability does not depend on onset times. One could have imagined for instance that the earliest onsets might have been obtained by chance, so that a second test would be systematically biased towards longer onsets: our analysis suggests that this was not the case.”

That conclusion was probably wrong, because the shift function for dependent marginals is inappropriate for assessing test-retest reliability. Inferences should be made on pairwise differences instead. So, if we use the shift function for pairwise differences, the results are very different! A much better diagnostic tool is to plot difference results as a function of session 1 results. This approach suggests, in our relatively small sample:

 

  • the earlier the onsets in session 1, the more they increased in session 2, such that the difference between sessions became more negative;
  • the later the onsets in session 1, the more they decreased in session 2, such that the difference between sessions became more positive. 

This result and the discrepancy between the two types of shift functions are very interesting and can be explained by a simple principle: for dependent variables, the difference between 2 means is equal to the mean of the individual pairwise differences; however, this does not have to be the case for other estimators, such as quantiles (Wilcox & Rousselet, 2018).

The discrepancy shows that I reached the wrong conclusion in a previous study because I used the wrong analysis. Of course, there is always the possibility that I’ve made a terrible coding mistake somewhere (that won’t be the first time – please let me know if you spot a fatal mistake). Let’s look at another example using published clinical data in which regression to the mean was suspected.

Example 3: Nambour skin cancer prevention trial

The data are from a cancer clinical trial described by Barnett et al. (2005). Here is Figure 3 from that paper:

barnett-ije-2005

“Scatter-plot of n = 96 paired and log-transformed betacarotene measurements showing change (log(follow-up) minus log(baseline)) against log(baseline) from the Nambour Skin Cancer Prevention Trial. The solid line represents perfect agreement (no change) and the dotted lines are fitted regression lines for the treatment and placebo groups”

Let’s try to make a similar-looking figure.

figure_nambour_scatter

Unfortunately, the original figure cannot be reproduced because the group membership has been mixed up in the shared dataset… So let’s merge the two groups and plot the data following our shift function convention, in which the difference is session 1 – session 2.

figure_nambour_scatter2

Regression to the mean is suggested by the large number of negative differences and the negative slope of the loess regression: participants with low results in session 1 tended to have higher results in session 2. This pattern can also be revealed by plotting session 2 as a function of session 1.

figure_nambour_scatter3

The shift function for marginals suggests increasing differences between session quantiles for increasing quantiles in session 1.

figure_nambour_sf_dhd

This result seems at odds with the previous plot, but it is easier to understand if we look at the kernel density estimates of the marginals. Thus, plotting difference scores as a function of session 1 scores probably remains the best strategy to have a fine-grained look at test-retest results.

figure_nambour_kde

A shift function for pairwise differences shows a very different pattern, consistent with the regression to the mean suggested by Barnett et al. (2005).

 

Conclusion

To assess test-retest reliability, it is very informative to use graphical representations, which can reveal interesting patterns that would be hidden in a correlation coefficient. Unfortunately, there doesn’t seem to be a magic tool to simultaneously illustrate and make inferences about test-retest reliability.

It seems that the shift function for pairwise differences is an excellent tool to look at test-retest reliability, and to spot patterns of regression to the mean. The next steps for the shift function for pairwise differences will be to perform some statistical validations for the frequentist version, and develop a Bayesian version.

That’s it for this post. If you use the shift function for pairwise differences to look at test-retest reliability, let me know and I’ll add a link here.

References

Barnett, A.G., van der Pols, J.C. & Dobson, A.J. (2005) Regression to the mean: what it is and how to deal with it. Int J Epidemiol, 34, 215-220.

Bland JM, Altman DG. (1986). Statistical methods for assessing agreement between two methods of clinical measurement. Lancet, i, 307-310.

Bieniek, M.M., Bennett, P.J., Sekuler, A.B. & Rousselet, G.A. (2016) A robust and representative lower bound on object processing speed in humans. The European journal of neuroscience, 44, 1804-1814.

Wilcox, R.R. & Rousselet, G.A. (2018) A Guide to Robust Statistical Methods in Neuroscience. Curr Protoc Neurosci, 82, 8.42.1-8.42.30.

A new shift function for dependent groups?

UPDATE (2018-05-17): the method suggested here is completely bogus. I’ve edited the post to explain why. To make inferences about difference scores, use the difference asymmetry function or make inferences about the quantiles of the differences (Rousselet, Pernet & Wilcox, 2017).


The shift function is a graphical and inferential method that allows users to quantify how two distributions differ. It is a frequentist tool that also comes in several Bayesian flavours, and can be applied to independent and dependent groups. The version for dependent groups uses differences between the quantiles of each group. However, for paired observations, it would also be useful to assess the quantiles of the pairwise differences. This is what this new shift function was supposed to do.

Let’s consider the fictive reaction time data below, generated using exGaussian distributions (n = 100 participants).

figure_kde

The kernel density estimates suggest interesting differences: condition 1 is overall more spread out than condition 2; as a result, the two distributions differ in both the left (fast participants) and right (slow participants) tails. However, this plot does not reveal the pairwise nature of the observations. This is better illustrated using a scatterplot.

figure_scatter

The scatterplot reveals more clearly the relationship between conditions:
– fast participants, shown in dark blue on the left, tended to be a bit faster in condition 1 than in condition 2;
– slower participants, shown in yellow on the right, tended to be slower in condition 1 than in condition 2;
– this effect seems to be more prominent for participants with responses larger than about 500 ms, with a trend for larger differences with increasing average response latencies.

A shift function can help assess and quantify this pattern. In the shift function below, the x axis shows the deciles in condition 1. The y axis shows the differences between deciles from the two conditions. The difference is reported in the coloured label. The vertical lines show the 95% percentile bootstrap confidence intervals. As we travel from left to right along the x axis, we consider progressively slower participants in condition 1. These slower responses in condition 1 are associated with progressively faster responses in condition 2 (the difference condition 1 – condition 2 increases).

figure_sf_dhd

So here the inferences are made on differences between quantiles of the marginal distributions: for each distribution, we compute quantiles, and then subtract the quantiles.
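To make the mechanics explicit, here is a rough base-R sketch of that computation with made-up data. The figures in this post use the Harrell-Davis quantile estimator and percentile bootstrap confidence intervals (as implemented in the rogme package), whereas this sketch uses standard quantile estimates and no confidence intervals.

```r
set.seed(44)
n <- 100
cond1 <- rnorm(n, 500, 100)                           # made-up paired observations
cond2 <- 500 + 0.5 * (cond1 - 500) + rnorm(n, 0, 50)

probs <- seq(0.1, 0.9, 0.1)
q1 <- quantile(cond1, probs, type = 8)                # deciles of condition 1
q2 <- quantile(cond2, probs, type = 8)                # deciles of condition 2

# shift function on the marginals: decile differences vs condition 1 deciles
plot(q1, q1 - q2,
     xlab = "Condition 1 deciles", ylab = "Decile difference (cond1 - cond2)")
abline(h = 0)
```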

What if we want to make inferences on the pairwise differences instead? This can be done by computing the quantiles of the differences, and plotting them as a function of the quantiles in one group. A small change in the code gives us a new shift function for dependent groups.

figure_sf_pdhd

The two versions look very similar, which is re-assuring, but does not demonstrate anything (except confirmation bias and wishful thinking on my part). But there might be situations where the two versions differ. Also, the second version makes explicit inferences about the pairwise differences, not about the differences between marginal distributions: so despite the similarities, they afford different conclusions.

Let’s look at the critical example that I should have considered before getting all excited and blogging about the “new method”. A simple negative control demonstrates what is wrong with the approach. Here are two dependent distributions, with a clear shift between the marginals.

figure_kde2

The pairwise relationships are better illustrated using a scatterplot, which shows a seemingly uniform shift between conditions.

figure_scatter2_1

Plotting the pairwise differences as a function of observations in condition 1 confirms the pattern: the differences don’t seem to vary much with the results in condition 1. In other words, differences don’t seem to be particularly larger or smaller for low results in condition 1 relative to high results.

figure_scatter2_2

The shift function on marginals does a great job at capturing the differences, showing a pattern characteristic of stochastic dominance (Speckman, Rouder, Morey & Pratte, 2008): one condition (condition 2) dominates the other at every decile. The differences also appear to be a bit larger for higher than lower deciles in condition 1.

figure_sf_dhd2

The modified shift function, shown next, makes no sense. That’s because the deciles of condition 1 and the deciles of the difference scores necessarily increase from 1 to 9, so plotting one as a function of the other ALWAYS gives a positive slope. The same positive slope I thought was capturing a pattern of regression to the mean! So I fooled myself because I was so eager to find a technique to quantify regression to the mean, and I only used examples that confirmed my expectations (confirmation bias)! This totally blinded me to what in retrospect is a very silly mistake.

figure_sf_pdhd2
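A tiny simulation makes the artefact obvious. In this made-up example the pairwise differences are unrelated to condition 1, yet plotting deciles of the differences against deciles of condition 1 still produces a near-perfect positive trend, because both sets of deciles increase by construction:

```r
set.seed(1)
n <- 100
cond1 <- rnorm(n, 500, 100)
cond2 <- cond1 + rnorm(n, mean = 50, sd = 20)    # roughly uniform shift:
                                                 # differences unrelated to cond1

probs <- seq(0.1, 0.9, 0.1)
q1 <- quantile(cond1, probs, type = 8)           # deciles of condition 1
qd <- quantile(cond1 - cond2, probs, type = 8)   # deciles of the pairwise differences

# both sets of deciles increase from the 1st to the 9th, so the plot always
# shows a positive trend, whatever the relationship in the raw data
plot(q1, qd, xlab = "Condition 1 deciles", ylab = "Deciles of differences")
cor(q1, qd)   # close to 1, even though the differences are unrelated to cond1
```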

Finally, let’s go back to the pattern observed in the previous shift function, where it seemed that the difference scores were increasing from low to high quantiles of condition 1. The presence of this pattern can better be tested using a technique that makes inferences about pairwise differences. One such technique is the difference asymmetry function. The idea from Wilcox (2012; Wilcox & Erceg-Hurn, 2012) goes like this: if two distributions are identical, then the difference scores should be symmetrically distributed around zero. To test for asymmetry, we can estimate sums of lower and higher quantiles; for instance, the sum of quantiles 0.05 and 0.95, 0.10 and 0.90, 0.15 and 0.85… For symmetric distributions with a median of zero, these sums should be close to zero, leading to a flat function centred at zero. If for instance the negative differences tend to be larger than the positive differences, the function will start with negative sums and will increase progressively towards zero (see example in Rousselet, Pernet & Wilcox, 2017).

In our example, the difference asymmetry function is negative and flat, which is characteristic of a uniform shift, without much evidence for an asymmetry. Which is good because that’s how the fake data were generated! So using graphical representations such as scatterplots, in conjunction with the shift function and the difference asymmetry function, can provide a very detailed and informative account of how two distributions differ.

figure_daf2
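Here is a rough base-R sketch of the quantile-sum idea with made-up difference scores. The real implementation (in rogme and in Wilcox’s functions) uses the Harrell-Davis estimator and a percentile bootstrap; this sketch uses standard quantiles and no inference.

```r
set.seed(8)
n <- 100
diff_scores <- rnorm(n, mean = -50, sd = 30)   # made-up pairwise differences (uniform shift)

probs <- seq(0.05, 0.40, 0.05)
# sum of lower and upper quantiles: q(p) + q(1 - p)
qsum <- quantile(diff_scores, probs, type = 8) +
        quantile(diff_scores, 1 - probs, type = 8)

# roughly flat and negative: a uniform negative shift, no marked asymmetry
plot(probs, qsum, type = "b", xlab = "quantile", ylab = "quantile sum")
abline(h = 0, lty = 2)
```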

Conclusion

I got very excited by the new approach because after spending several days thinking about test-retest reliability assessment from a graphical perspective, I thought I had found the perfect tool, as explained in the next post. So the ingredients of my mistake are clear: statistical sloppiness and confirmation bias.

The code for the figures in this post and for the new bogus shift function is available on github. I will not update the rogme package, which implements the otherwise perfectly valid shift functions and difference asymmetry functions.

References

Speckman, P.L., Rouder, J.N., Morey, R.D. & Pratte, M.S. (2008) Delta plots and coherent distribution ordering. Am Stat, 62, 262-266.

Rousselet, G.A., Pernet, C.R. & Wilcox, R.R. (2017) Beyond differences in means: robust graphical methods to compare two groups in neuroscience. The European journal of neuroscience, 46, 1738-1748. [preprint] [reproducibility package]

Wilcox, R.R. (2012) Comparing Two Independent Groups Via a Quantile Generalization of the Wilcoxon-Mann-Whitney Test. Journal of Modern Applied Statistical Methods, 11, 296-302.

Wilcox, R.R. & Erceg-Hurn, D.M. (2012) Comparing two dependent groups via quantiles. J Appl Stat, 39, 2655-2664.

Reaction times and other skewed distributions: problems with the mean and the median (part 4/4)

This is part 4 of a 4 part series. Part 1 is here.

In this post, I look at median bias in a large dataset of reaction times from participants engaged in a lexical decision task. The dataset was described in a previous post.

After removing a few participants who didn’t pay attention to the task (low accuracy or too many very late responses), we’re left with 959 participants to play with. Each participant had between 996 and 1001 trials for each of two conditions, Word and Non-Word.

Here is an illustration of reaction time distributions from 100 randomly sampled participants in the Word condition:

figure_flp_w_100_kde

Same in the Non-Word condition:

figure_flp_nw_100_kde

Skewness tended to be larger in the Word than the Non-Word condition. Based on the standard parametric definition of skewness, that was the case in 80% of participants. If we use a non-parametric estimate instead (mean – median), it was the case in 70% of participants.

If we save the median of every individual distribution, we get the two following group distributions, which display positive skewness:

figure_flp_all_p_median

The same applies to distributions of means:

figure_flp_all_p_mean

So we have to worry about skewness at 2 levels:

  • individual distributions

  • group distributions

Here I’m only going to explore estimation bias as a result of skewness and sample size in individual distributions. From what we learnt in previous posts, we can already make predictions: because skewness tended to be stronger in the Word than in the Non-Word condition, the bias of the median will be stronger in the former than the latter for small sample sizes. That is, the median in the Word condition will tend to be more over-estimated than the median in the Non-Word condition. As a consequence, the difference between the median of the Non-Word condition (larger RT) and the median of the Word condition (smaller RT) will tend to be under-estimated. To check this prediction, I estimated bias in every participant using a simulation with 2,000 iterations. I assumed that the full sample was the population, from which we can compute population means and population medians. Because the Non-Word condition is the least skewed, I used it as the reference condition, which always had 200 trials. The Word condition had 10 to 200 trials, with 10 trial increments. In the simulation, single RTs were sampled with replacement among the roughly 1,000 trials available per condition and participant, so that each iteration is equivalent to a fake experiment.
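The full data-driven simulation is on github; here is a minimal sketch of the logic for a single participant, using made-up stand-in distributions rather than the lexical decision data:

```r
set.seed(3)
# stand-in data for one participant: ~1000 trials per condition
word    <- rnorm(1000, 550, 80) + rexp(1000, 1/100)  # more skewed condition
nonword <- rnorm(1000, 600, 80) + rexp(1000, 1/60)   # less skewed condition

pop_diff <- median(nonword) - median(word)   # "population" difference
niter <- 2000
nw_n  <- 200                                 # Non-Word: always 200 trials
nseq  <- seq(10, 200, 10)                    # Word: 10 to 200 trials

bias <- sapply(nseq, function(n) {
  d <- replicate(niter,
    median(sample(nonword, nw_n, replace = TRUE)) -
    median(sample(word, n, replace = TRUE)))
  mean(d) - pop_diff                         # standard (mean) bias of the difference
})
round(bias, 1)   # negative for small Word sample sizes, shrinking towards zero
```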

Let’s look at the results for the median. The figure below shows the bias in the long run estimation of the difference between medians (Non-Word – Word), as a function of sample size in the Word condition. The Non-Word condition always had 200 trials. All participants are superimposed and shown as coloured traces. The average across participants is shown as a thicker black line. 

figure_flp_bias_diff_md

As expected, bias tended to be negative with small sample sizes. For the smallest sample size, the average bias was -11 ms. That’s probably substantial enough to seriously distort estimation in some experiments. Also, variability is high, with an 80% highest density interval of [-17.1, -2.6] ms. Bias decreases rapidly with increasing sample size. For n=60, it is only 1 ms.

But inter-participant variability remains high, so we should be cautious interpreting results with large numbers of trials but few participants. To quantify the group uncertainty, we could measure the probability of being wrong, given a level of desired precision, as demonstrated here for instance.

After bootstrap bias correction (with 200 bootstrap resamples), the average bias drops to roughly zero for all sample sizes:

figure_flp_bias_diff_md_bc
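For readers unfamiliar with the technique, here is a minimal sketch of the standard bootstrap bias correction applied to a single sample median; in the simulations it is applied within every iteration and condition:

```r
set.seed(4)
samp <- rnorm(20, 550, 80) + rexp(20, 1/100)   # one small skewed sample (n = 20)

md <- median(samp)                             # raw estimate
nboot <- 200
boot_md <- replicate(nboot, median(sample(samp, replace = TRUE)))

# bias-corrected estimate: 2 * estimate - mean of the bootstrap estimates
md_bc <- 2 * md - mean(boot_md)
c(raw = md, bias_corrected = md_bc)
```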

Bias correction also reduced inter-participant variability. 

As we saw in the previous post, the sampling distribution of the median is skewed, so the standard measure of bias (taking the mean across simulation iterations) does not provide a good indication of the bias we can expect in a typical experiment. If instead of the mean, we compute the median bias, we get the following results:

figure_flp_mdbias_diff_md

Now, at the smallest sample size, the average bias is only -2 ms, and it drops to near zero for n=20. This result is consistent with the simulations reported in the previous post and confirms that in the typical experiment, the average bias associated with the median is negligible.

What happens with the mean?

figure_flp_bias_diff_m

The average bias of the mean is near zero for all sample sizes. Individual bias values are also much less variable than median values. This difference in bias variability does not reflect a difference in variability among participants for the two estimators of central tendency. In fact, the distributions of differences between Non-Word and Word conditions are very similar for the mean and the median. 

figure_flp_all_p_diff

Estimates of spread are also similar between distributions:

IQR: mean RT = 78; median RT = 79

MAD: mean RT = 57; median RT = 54

VAR: mean RT = 4507; median RT = 4785

This suggests that the inter-participant bias differences are due to the shape differences observed in the first two figures of this post. 

Finally, let’s consider the median bias of the mean.

figure_flp_mdbias_diff_m

For the smallest sample size, the average bias across participants is 7 ms. This positive bias can be explained easily from the simulation results of post 3: because of the larger skewness in the Word condition, the sampling distribution of the mean was more positively skewed for small samples in that condition compared to the Non-Word condition, with the bulk of the bias estimates being negative. As a result, the mean tended to be more under-estimated in the Word condition, leading to larger Non-Word – Word differences in the typical experiment. 

I have done a lot more simulations and was planning even more, using other datasets, but it’s time to move on! Of particular note, it appears that in difficult visual search tasks, skewness can differ dramatically among set size conditions – see for instance data posted here.

Concluding remarks

The data-driven simulations presented here confirm results from our previous simulations:

  • if we use the standard definition of bias, for small sample sizes, mean estimates are not biased, median estimates are biased;
  • however, in the typical experiment (median bias), mean estimates can be more biased than median estimates;

  • bootstrap bias correction can be an effective tool to reduce bias.

Given the large differences in inter-participant variability between the mean and the median, an important question is how to spend your money: more trials or more participants (Rouder & Haaf 2018)? An answer can be obtained by running simulations, either data-driven or assuming generative distributions (for instance exGaussian distributions for RT data). Simulations that take skewness into account are important to estimate bias and power. Assuming normality can have disastrous consequences.

Despite the potentially larger bias and bias variability of the median compared to the mean, for skewed distributions I would still use the median as a measure of central tendency, because it provides a more informative description of the typical observations. Large sample sizes will reduce both bias and estimation variability, such that high-precision single-participant estimation should be easy to obtain in many situations involving non-clinical samples. For group estimations, much larger samples than commonly used are probably required to improve the precision of our inferences.

Although the bootstrap bias correction seems to work very well in the long run, for a single experiment there is no guarantee it will get you closer to the truth. One possibility is to report results with and without bias correction. 

For group inferences on the median, traditional techniques use incorrect estimations of the standard error, so consider modern parametric or non-parametric techniques instead (Wilcox & Rousselet, 2018). 

References

Miller, J. (1988) A warning about median reaction time. J Exp Psychol Hum Percept Perform, 14, 539-543.

Rouder, J.N. & Haaf, J.M. (2018) Power, Dominance, and Constraint: A Note on the Appeal of Different Design Traditions. Advances in Methods and Practices in Psychological Science, 1, 19-26.

Wilcox, R.R. & Rousselet, G.A. (2018) A Guide to Robust Statistical Methods in Neuroscience. Curr Protoc Neurosci, 82, 8.42.1-8.42.30.

Bayesian shift function

Two distributions can differ in many ways, yet the standard approach in neuroscience & psychology is to assume differences in means. That’s why the first step in exploratory data analysis should always be detailed graphical representations (Rousselet et al. 2016, 2017). To help quantify how two distributions differ, a fantastic tool is the shift function – an example is provided below. It consists in plotting the difference between group quantiles, as a function of the quantiles in one group. The technique was first described in the 1970s by Doksum (1974; Doksum & Sievers, 1976), and later refined by Wilcox using the Harrell-Davis quantile estimator (Harrell & Davis, 1982) in conjunction with two percentile bootstrap methods (Wilcox 1995; Wilcox et al. 2014). The technique is related to delta plots and relative distribution methods (see details in Rousselet et al. 2017).  

The goal of this post is to get feedback on my first attempt to make a Bayesian version of the shift function. Below I describe three potential strategies. My main motivation for making a Bayesian version is that the shift function comes with frequentist confidence intervals and p values. Although I still use confidence intervals to describe sampling variability, they are inherently linked to p values and tend to be associated with major flaws in interpretation (Morey et al. 2016). And my experience is that if p values are available, most researchers will embark on lazy and irrational decision making (Wagenmakers 2007). P values would not be a problem if they were just used as one of many pieces of evidence, without any special status (McShane et al. 2017).

Let’s consider a toy example: two independent exGaussian distributions, with n = 100 in each group.

figure_data

The two groups clearly differ, as expected from the generative process (see online code). A t-test on means suggests a large uncertainty about the group difference:

difference = -65 ms [-166, 37]

t = -1.26

p = 0.21

A shift function provides much more information about how the two groups differ.

figure_sf

The x-axis shows the quantiles of group 1 (here only the 9 deciles, which is a good default). The y-axis shows the difference between deciles of group 1 and group 2. Intuitively, the difference shows by how much group 2 would need to be shifted to match group 1, for each decile. The coloured labels show the quantile differences. I let you go back and forth between the density estimates and the shift function to understand what’s going on. Another detailed description is provided here if needed.  

Strategy 1: Bayesian bootstrap

A simple strategy to make a Bayesian shift function is to use the Bayesian bootstrap instead of a percentile bootstrap. The percentile bootstrap can already be considered a very cheap way to create Bayesian posterior distributions, if we make the strong (and wrong) assumption that our observations are the only possible ones. Rasmus Bååth provides a detailed introduction to the Bayesian bootstrap, and its R implementation, on his blog. There is also a video.
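As a minimal sketch of this strategy for a single decile (the median), assuming the bayesboot package is installed and using made-up exGaussian-like data rather than the toy example above; repeating this for each decile gives the full shift function:

```r
library(bayesboot)

set.seed(1)
g1 <- rnorm(100, 500, 100) + rexp(100, 1/80)   # made-up group 1
g2 <- rnorm(100, 450, 100) + rexp(100, 1/40)   # made-up group 2

# Bayesian bootstrap posteriors for one decile (here the median) of each group
post1 <- bayesboot(g1, function(x) quantile(x, 0.5, type = 8), R = 2000)
post2 <- bayesboot(g2, function(x) quantile(x, 0.5, type = 8), R = 2000)

# posterior for the decile difference, summarised by a 95% credible interval
post_diff <- post1[[1]] - post2[[1]]
quantile(post_diff, c(0.025, 0.5, 0.975))
```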

Using the Bayesian bootstrap, and 95% highest density intervals (HDI) of the posterior distributions, the shift function looks very similar to the original version, as expected. 

figure_bbsf

Except now we’re dealing with credible intervals, and there are no p values, so users have to focus on quantification!

Strategy 2: Bayesian quantile regression

Another strategy is to use quantile regression, which comes in a Bayesian flavour using the asymmetric Laplacian likelihood family. To do that, I’m using the amazing brms R package by Paul Bürkner. 

To get started with Bayesian statistics and the brms package, I recommend this excellent blog post by Matti Vuorre. Many other great posts are available here.

Using the default priors from brms, we can fit a quantile regression line for each group and for each decile. The medians (or means, or modes) and 95% HDIs of the posterior distributions can then be used to create a shift function.

figure_bqrsf
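Here is a rough sketch of one such fit for a single decile, with made-up data. It assumes a recent brms version in which the asym_laplace() family is available and the quantile parameter can be fixed inside bf(); the exact model specification used for the figure may differ.

```r
library(brms)

set.seed(1)
df <- data.frame(
  rt = c(rnorm(100, 500, 100) + rexp(100, 1/80),
         rnorm(100, 450, 100) + rexp(100, 1/40)),
  group = rep(c("g1", "g2"), each = 100)
)

# one model per decile: asymmetric Laplace likelihood with the quantile fixed
fit_q1 <- brm(bf(rt ~ 0 + group, quantile = 0.1),
              family = asym_laplace(), data = df,
              chains = 2, cores = 2)

# posterior draws of the first decile difference between groups
draws <- as_draws_df(fit_q1)
diff_q1 <- draws$b_groupg1 - draws$b_groupg2
quantile(diff_q1, c(0.025, 0.5, 0.975))
```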

Again, the results are quite similar to the original ones, although this time the quantiles do differ a bit from those in the previous versions because we use the medians of the posterior distributions to estimate them. Also, I haven’t looked in much detail at how well the model fits the data. The posterior predictive samples suggest the fits could be improved, but I have too little experience to make a call.

Strategy 3: Bayesian model with exGaussians

The third strategy is to fit a descriptive model to the distributions, generate samples from the posterior distributions, and compute quantiles from these predicted values. Here, since our toy model simulates reaction time data using exGaussian distributions, it makes sense to fit an exGaussian family to the data. More generally, exGaussian distributions are very good at capturing the shape of RT data (Matzke & Wagenmakers, 2009).

figure_bexgsf
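A rough sketch of this third strategy, again with made-up data and not necessarily the same model specification as the one behind the figure: fit an exGaussian model with brms, letting the shape parameters vary by group, then compute decile differences within each posterior predictive draw.

```r
library(brms)

set.seed(1)
df <- data.frame(
  rt = c(rnorm(100, 500, 100) + rexp(100, 1/80),
         rnorm(100, 450, 100) + rexp(100, 1/40)),
  group = rep(c("g1", "g2"), each = 100)
)

# one exGaussian model; sigma and beta (the exponential component) are also
# allowed to differ between groups to capture shape differences
fit <- brm(bf(rt ~ group, sigma ~ group, beta ~ group),
           family = exgaussian(), data = df, chains = 2, cores = 2)

# posterior predictive draws (rows = draws, columns = observations)
pp <- posterior_predict(fit)
probs <- seq(0.1, 0.9, 0.1)

# decile differences computed within each posterior draw
qdiff <- apply(pp, 1, function(row) {
  quantile(row[df$group == "g1"], probs) - quantile(row[df$group == "g2"], probs)
})

# posterior medians and 95% intervals of the decile differences
round(t(apply(qdiff, 1, quantile, probs = c(0.025, 0.5, 0.975))), 1)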

Again, the results look similar to the original ones. This strategy has the advantage of requiring only one model to be fitted, instead of a new model for each quantile in strategy 2. In addition, we get an exGaussian fit of the data, which is very useful in itself. So that would probably be my favourite strategy.

Questions to you, reader

Does this make even remotely sense? 

Which strategy seems more promising?

The clear benefit of strategies 2 and 3 is that they can easily be extended to create hierarchical shift functions, by fitting simultaneously observations from multiple participants. Whereas for the original shift function using the percentile bootstrap, I don’t see any obvious way to make a hierarchical version.

You might want to ask, why not simply focus on modelling the distributions instead of looking at the quantiles? Modelling the shape of the distributions is great of course, but I don’t think it can achieve the same fine-grained quantification achieved by the shift function. Another approach of course would be to fit a more psychologically motivated model, such as a diffusion model. I think these three approaches are complementary. 

References

Doksum, K. (1974) Empirical Probability Plots and Statistical Inference for Nonlinear Models in the two-Sample Case. Annals of Statistics, 2, 267-277.

Doksum, K.A. & Sievers, G.L. (1976) Plotting with Confidence – Graphical Comparisons of 2 Populations. Biometrika, 63, 421-434.

Harrell, F.E. & Davis, C.E. (1982) A new distribution-free quantile estimator. Biometrika, 69, 635-640.

McShane, B.B., Gal, D., Gelman, A., Robert, C. & Tackett, J.L. (2017) Abandon Statistical Significance. arXiv:1709.07588

Matzke, D. & Wagenmakers, E.J. (2009) Psychological interpretation of the ex-Gaussian and shifted Wald parameters: a diffusion model analysis. Psychon Bull Rev, 16, 798-817.

Morey, R.D., Hoekstra, R., Rouder, J.N., Lee, M.D. & Wagenmakers, E.J. (2016) The fallacy of placing confidence in confidence intervals. Psychon Bull Rev, 23, 103-123.

Rousselet, G.A., Foxe, J.J. & Bolam, J.P. (2016) A few simple steps to improve the description of group results in neuroscience. The European journal of neuroscience, 44, 2647-2651.

Rousselet, G., Pernet, C. & Wilcox, R. (2017) Beyond differences in means: robust graphical methods to compare two groups in neuroscience, figshare.

Wagenmakers, E.J. (2007) A practical solution to the pervasive problems of p values. Psychonomic bulletin & review, 14, 779-804.

Wilcox, R.R. (1995) Comparing Two Independent Groups Via Multiple Quantiles. Journal of the Royal Statistical Society. Series D (The Statistician), 44, 91-99.

Wilcox, R.R., Erceg-Hurn, D.M., Clark, F. & Carlson, M. (2014) Comparing two independent groups via the lower and upper quantiles. J Stat Comput Sim, 84, 1543-1551.

Cohen’s d is biased

The R notebook associated with this post is available on github.

Cohen’s d is a popular measure of effect size. In the one-sample case, d is simply computed as the mean divided by the standard deviation (SD). For repeated measures, the same formula is applied to difference scores (see detailed presentation and explanation of variants in Lakens, 2013). 

Because d relies on a non-robust measure of central tendency (the mean), and a non-robust measure of dispersion (SD), it is a non-robust measure of effect size, meaning that a single observation can have a dramatic effect on its value, as explained here. Cohen’s d also makes very strict assumptions about the data, so it is only appropriate in certain contexts. As a consequence, it should not be used as the default measure of effect size, and more powerful and informative alternatives should be considered – see a few examples here. For comparisons across studies and meta-analyses, nothing will beat data-sharing though.

Here we look at another limitation of Cohen’s d: it is biased when we draw small samples. Bias is covered in detail in another post. In short, in the one-sample case, when Cohen’s d is estimated from a small sample, in the long run it tends to be larger than the population value. This over-estimation is due to a bias of SD, which tends to be lower than the population SD. Because the mean is not biased, dividing it by an under-estimated SD leads to an over-estimated measure of effect size. The bias of SD is explained in intro stat books, in the section describing Student’s t. Not surprisingly, it is never mentioned in discussions of small n studies as a limitation of effect size estimation…
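The textbook fact is easy to check with a quick simulation. This sketch uses a normal population for simplicity (the simulations below use ex-Gaussians), with arbitrary population values chosen so that the true d is 2:

```r
set.seed(1)
pop_m <- 600; pop_sd <- 300          # population values, so the true d = 2
nsim <- 10000
n <- 10                              # small sample size

samples <- matrix(rnorm(nsim * n, pop_m, pop_sd), nrow = nsim)
m <- rowMeans(samples)
s <- apply(samples, 1, sd)
d <- m / s                           # one-sample Cohen's d for each simulated sample

round(c(mean_bias = mean(m) - pop_m,               # ~0: the mean is unbiased
        sd_bias   = mean(s) - pop_sd,              # negative: SD is under-estimated on average
        d_bias    = mean(d) - pop_m / pop_sd), 2)  # positive: d is over-estimated on average
```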

In this demonstration, we sample with replacement 10,000 times from the ex-Gaussian distributions below, for various sample sizes, as explained here:

figure_miller_distributions

The table below shows the population values for each distribution. For comparison, we also consider a robust equivalent to Cohen’s d, in which the mean is replaced by the median, and SD is replaced by the percentage bend mid-variance (pbvar, Wilcox, 2017). As we will see, this robust alternative is also biased – there is no magic solution I’m afraid.

m:       600   600   600   600   600   600   600   600   600   600   600   600
md:      509   512   524   528   540   544   555   562   572   579   588   594
m-md:     92    88    76    72    60    55    45    38    29    21    12     6
m.den:   301   304   251   255   201   206   151   158   102   112    54    71
md.den:  216   224   180   190   145   157   110   126    76    95    44    68
m.es:    2.0   2.0   2.4   2.4   3.0   2.9   4.0   3.8   5.9   5.4  11.1   8.5
md.es:   2.4   2.3   2.9   2.8   3.7   3.5   5.0   4.5   7.5   6.1  13.3   8.8

m = mean; md = median; den = denominator; es = effect size; m.es = Cohen’s d (m / m.den); md.es = md / pbvar.

 

Let’s look at the behaviour of d as a function of skewness and sample size.

figure_es_m_es

Effect size d tends to decrease with increasing skewness, because SD tends to increase with skewness. Effect size also increases with decreasing sample size. This bias is stronger for samples from the least skewed distributions. This is counterintuitive, because one would think estimation tends to get worse with increased skewness. Let’s find out what’s going on.

Computing the bias normalises the effect sizes across skewness levels, revealing large bias differences as a function of skewness. Even with 100 observations, the bias (mean of 10,000 simulation iterations) is still slightly larger than zero for the least skewed distributions. This bias is not due to the mean, because the sample mean is an unbiased estimator of the population mean.

figure_es_m_es_bias

Let’s check to be sure:

figure_es_m_num

So the problem must be with the denominator:

figure_es_m_den

Unlike the mean, the denominator of Cohen’s d, SD, is biased. Let’s look at bias directly.

figure_es_m_den_bias

SD is most strongly biased for small sample sizes and bias increases with skewness. Negative values indicate that sample SD tends to under-estimate the population values. This is because the sampling distribution of SD is increasingly skewed with increasing skewness and decreasing sample sizes. This can be seen in this plot of the 80% highest density intervals (HDI) for instance:

figure_m_den_hdi80

The sampling distribution of SD is increasingly skewed and variable with increasing skewness and decreasing sample sizes. As a result, the sampling distribution of Cohen’s d is also skewed. The bias is strongest in absolute terms for the least skewed distributions because the sample SD is overall smaller for these distributions, resulting in overall larger effect sizes. Although SD is most biased for the most skewed distributions, SD is also overall much larger for them, resulting in much smaller effect sizes than those obtained for less skewed distributions. This strong attenuation of effect sizes with increasing skewness swamps the absolute differences in SD bias. This explains the counter-intuitive lower d bias for more skewed distributions.

As we saw previously, bias can be corrected using a bootstrap approach. Applied to Cohen’s d, this technique does reduce bias, but it still remains a concern:

figure_es_m_es_bias_after_bc

Finally, let’s look at the behaviour of a robust equivalent to Cohen’s d, the median normalised by the percentage bend mid-variance.

figure_es_md_es

The median effect size shows a similar profile to the mean effect size. It is overall larger than the mean effect size because it uses a robust measure of spread, which is less sensitive to the long right tails of the skewed distributions we sample from.

figure_es_md_bias

The bias disappears quickly with increasing sample sizes, and more quickly than for the mean effect size.

However, unlike what we observed for d, in this case the bias correction does not work for small samples, because the repetition of the same observations in some bootstrap samples leads to very large values of the denominator. It’s ok for n>=15, for which bias is relatively small anyway, so at least based on these simulations, I wouldn’t use bias correction for this robust effect size.

figure_es_md_bias_after_bc

Conclusion

Beware of small sample sizes: they are associated with increased variability (see discussion in a clinical context here) and can accentuate the bias of some effect size estimates. If effect sizes tend to be reported more often if they pass some arbitrary threshold, for instance p < 0.05, then the literature will tend to over-estimate them (see demonstration here), a phenomenon exacerbated by small sample sizes (Button et al. 2013). 

Can’t say it enough: small n is bad for science if the goal is to provide accurate estimates of effect sizes.

To determine how the precision and accuracy of your results depend on sample size, the best approach is to perform simulations, providing some assumptions about the shape of the population distributions.

References

Button, K.S., Ioannidis, J.P., Mokrysz, C., Nosek, B.A., Flint, J., Robinson, E.S. & Munafo, M.R. (2013) Power failure: why small sample size undermines the reliability of neuroscience. Nature reviews. Neuroscience, 14, 365-376.

Lakens, D. (2013) Calculating and reporting effect sizes to facilitate cumulative science: a practical primer for t-tests and ANOVAs. Front Psychol, 4, 863.

Wilcox, R.R. (2017) Introduction to Robust Estimation and Hypothesis Testing. Academic Press, 4th edition., San Diego, CA.

Reaction times and other skewed distributions: problems with the mean and the median (part 3/4)

Bias is defined as the distance between the mean of the sampling distribution (here estimated using Monte-Carlo simulations) and the population value. In part 1 and part 2, we saw that for small sample sizes, the sample median provides a biased estimation of the population median, which can significantly affect group comparisons. However, this bias disappears with large sample sizes, and it can be corrected using a bootstrap bias correction. In part 3, we look in more detail at the shape of the sampling distributions, which was ignored by Miller (1988).

Sampling distributions

Let’s consider the sampling distributions of the mean and the median for different sample sizes and ex-Gaussian distributions with skewness ranging from 6 to 92 (Figure 1). When skewness is limited (6, top row), the sampling distributions are symmetric and centred on the population values: there is no bias. As we saw previously, with increasing sample size, variability decreases, which is why studies with larger samples provide more accurate estimations. The flip side is that studies with small samples are much noisier, which is why their results tend not to replicate…

figure_samp_dist_summary

Figure 1
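Such sampling distributions are easy to generate, because an ex-Gaussian observation is simply a normal observation plus an exponential one. Here is a minimal sketch with made-up parameters (the figures use the 12 parameter sets considered in the previous posts):

```r
set.seed(7)
nsim <- 10000
# ex-Gaussian = normal + exponential; mu, sigma, tau are made-up values
rexgauss <- function(n, mu = 300, sigma = 20, tau = 300) {
  rnorm(n, mu, sigma) + rexp(n, 1 / tau)
}

n <- 10
means   <- replicate(nsim, mean(rexgauss(n)))
medians <- replicate(nsim, median(rexgauss(n)))

# population values approximated from a very large sample
pop <- rexgauss(1e6)
c(mean_bias   = mean(means) - mean(pop),     # ~0: the mean is unbiased
  median_bias = mean(medians) - median(pop)) # positive for this skewed distribution
```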

When skewness is large (92, middle row), sampling distributions get more positively skewed with decreasing sample sizes. To better understand how the sampling distributions change with sample size, we turn to the last row of Figure 1, which shows 50% highest-density intervals (HDI). Each horizontal line is a HDI for a particular sample size. The labels contain the values of the interval boundaries. The coloured vertical tick inside the interval marks the median of the distribution. The red vertical line spanning the entire plot is the population value.

Means: For small sample sizes, the 50% HDI is offset to the left of the population mean, and so is the median of the sampling distribution. This demonstrates that the typical sample mean tends to under-estimate the population mean – that is to say, the mean sampling distribution is median biased. This offset reduces with increasing sample size, but is still present even for n=100.

Medians: With small sample sizes, there is a discrepancy between the 50% HDI, which is shifted to the left of the population median, and the median of the sampling distribution, which is shifted to the right of the population median. This contrasts with the results for the mean, and can be explained by differences in the shapes of the sampling distributions, in particular the larger skewness and kurtosis of the median sampling distribution compared to that of the mean (see code on github for extra figures). The offset between the 50% HDI and the population value reduces quickly with increasing sample size. For n=10, the median bias is already very small. From n=15, the median sampling distribution is no longer median biased, which means that the typical sample median is not biased.

Another representation of the sampling distributions is provided in Figure 2: 50% HDI are shown as a function of sample size. For both the mean and the median, bias increases with increasing skewness and decreasing sample size. Skewness also increases the asymmetry of the sampling distributions, but more so for the mean than for the median.

figure_samp_dist_hdi_summary

Figure 2

So what’s going on here? Is the mean also biased? According to the standard definition of bias, which is based on the distance between the population mean and the average of the sampling distribution of the mean, the mean is not biased. But this definition applies to the long run, after we replicate the same experiment many times. In practice, we never do that. So what happens in practice, when we perform only one experiment? In that case, the median of the sampling distribution provides a better description of the typical experiment than the mean of the distribution. And the median of the sampling distribution of the mean is lower than the population mean when sample size is small. So if you conduct one small n experiment and compute the mean of a skewed distribution, you’re likely to under-estimate the true value.

Is the median biased after all? The median is indeed biased according to the standard definition. However, with small n, the typical median (represented by the median of the sampling distribution of the median) is close to the population median, and the difference disappears for even relatively small sample sizes. 

Is it ok to use the median then?

If the goal is to accurately estimate the central tendency of a RT distribution, while protecting against the influence of outliers, the median is far more efficient than the mean (Wilcox & Rousselet, 2018). Providing sample sizes are large enough, bias is actually not a problem, and the typical bias is actually very small, as we’ve just seen. So, if you have to choose between the mean and the median, I would go for the median without hesitation.

It’s more complicated though. In an extensive series of simulations, Ratcliff (1993) demonstrated that when performing standard group ANOVAs, the median can lack power compared to other estimators. Ratcliff’s simulations involved ANOVAs on group means, in which for each participant, very few trials (7 to 12) were available for each condition. Based on the simulations, Ratcliff recommended data transformations or computing the mean after applying specific cut-offs to maximise power. However, these recommendations should be considered with caution because the results could be very different with more realistic sample sizes. Also, standard ANOVAs on group means are not robust, and alternative techniques should be considered (Wilcox 2017). Data transformations are not ideal either, because they change the shape of the distributions, which contains important information about the nature of the effects. Also, once data are transformed, inferences are made on the transformed data, not on the original ones, an important caveat that tends to be swept under the carpet in articles’ discussions… Finally, truncating distributions introduces bias too, especially with the mean – see next section (Miller 1991; Ulrich & Miller 1994)!

At this stage, I don’t see much convincing evidence against using the median of RT distributions, if the goal is to use only one measure of location to summarise the entire distribution. Clearly, a better alternative is to not throw away all that information, by studying how entire distributions differ (Rousselet et al. 2017). For instance, explicit modelling of RT distributions can be performed with the excellent brms R package.

Other problems with the mean

In addition to being median biased, and a poor measure of central tendency for asymmetric distributions, the mean is also associated with several other important problems. Standard procedures using the mean lack power, offer poor control over false positives, and lead to inaccurate confidence intervals. Detailed explanations of these problems are provided in Field & Wilcox (2017) and Wilcox & Rousselet (2018) for instance. For detailed illustrations of the problems associated with means in the one-sample case, when dealing with skewed distributions, see the companion reproducibility package on figshare.

If that was not enough, common outlier exclusion techniques lead to biased estimation of the mean (Miller, 1991). When applied to skewed distributions, removing any values more than 2 or 3 SD from the mean affects slow responses more than fast ones. As a consequence, the sample mean tends to underestimate the population mean. And this bias increases with sample size because the outlier detection technique does not work for small sample sizes, which results from the lack of robustness of the mean and the SD. The bias also increases with skewness. Therefore, when comparing distributions that differ in sample size, or skewness, or both, differences can be masked or created, resulting in inaccurate quantification of effect sizes.

Truncation using absolute thresholds (for instance RT < 300 ms or RT > 1,200 ms) also leads to potentially severe bias of the mean, median, standard deviation and skewness of RT distributions (Ulrich & Miller 1994). The median is much less affected by truncation bias than the mean though.

In the next and final post of this series, we will explore sampling bias in a real dataset, to see how much of a problem we’re really dealing with. Until then, thanks for reading.

[GO TO POST 4/4]

References

Field, A.P. & Wilcox, R.R. (2017) Robust statistical methods: A primer for clinical psychology and experimental psychopathology researchers. Behav Res Ther, 98, 19-38.

Miller, J. (1988) A warning about median reaction time. J Exp Psychol Hum Percept Perform, 14, 539-543.

Miller, J. (1991) Reaction-Time Analysis with Outlier Exclusion – Bias Varies with Sample-Size. Q J Exp Psychol-A, 43, 907-912.

Ratcliff, R. (1993) Methods for dealing with reaction time outliers. Psychol Bull, 114, 510-532.

Rousselet, G.A., Pernet, C.R. & Wilcox, R.R. (2017) Beyond differences in means: robust graphical methods to compare two groups in neuroscience. The European journal of neuroscience, 46, 1738-1748.

Ulrich, R. & Miller, J. (1994) Effects of Truncation on Reaction-Time Analysis. Journal of Experimental Psychology-General, 123, 34-80.

Wilcox, R.R. (2017) Introduction to Robust Estimation and Hypothesis Testing. Academic Press, 4th edition., San Diego, CA.

Wilcox, R.R. & Rousselet, G.A. (2018) A Guide to Robust Statistical Methods in Neuroscience. Curr Protoc Neurosci, 82, 8.42.1-8.42.30.