Tag Archives: bootstrap

Reaction times and other skewed distributions: problems with the mean and the median (part 2/4)

As we saw in the previous post, the sample median is biased when sampling from skewed distributions, and the bias increases with decreasing sample size. According to Miller (1988), because of this bias, group comparisons can be affected if the two groups differ in skewness or sample size, or both. As a result, real differences can be attenuated or inflated, and non-existent differences suggested. In Miller’s own words:

“An important practical consequence of the bias in median reaction time is that sample medians must not be used to compare reaction times across experimental conditions when there are unequal numbers of trials in the conditions.”

Here we will see that this advice is wrong for two reasons.

We assess the problem using a simulation in which we draw samples of equal or unequal sizes from populations that do or do not differ in skewness. We use the same 12 distributions used by Miller (1988), as described previously.

Group 2 has size 200 and is sampled from the distribution with the least skewness.

Group 1 has size 10 to 200, in increments of 10, and is sampled from the 12 distributions.
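For readers who want to tinker before downloading the full code, here is a minimal sketch in R of one cell of this group-comparison simulation. The ex-Gaussian parameters below are placeholders chosen for illustration only; the exact 12 parameter sets are in Miller (1988) and in the github code.

# Minimal sketch of one simulation cell: group 1 (n = 10) vs. group 2 (n = 200).
# The ex-Gaussian parameters are placeholders; see Miller (1988) / github for the real ones.
rexgauss <- function(n, mu, sigma, tau) rnorm(n, mu, sigma) + rexp(n, rate = 1/tau)

set.seed(21)
niter <- 10000
n1 <- 10    # group 1 sample size (10 to 200 in the full simulation)
n2 <- 200   # group 2 sample size
# approximate the population medians from very large samples
pop1 <- median(rexgauss(1e6, 300, 20, 300))  # group 1: more skewed population
pop2 <- median(rexgauss(1e6, 300, 20, 50))   # group 2: less skewed population

md.diff <- numeric(niter)
for (it in seq_len(niter)) {
  md.diff[it] <- median(rexgauss(n1, 300, 20, 300)) - median(rexgauss(n2, 300, 20, 50))
}
mean(md.diff) - (pop1 - pop2)  # bias of the difference between group medians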

After 10,000 iterations, here are the results for the mean:


All the bias values are near zero, as expected. The shaded area shows the upper part of the mean’s bias 50% HDI (highest density interval) when group 1 and group 2 are sampled from the same, least skewed, distribution (skewness 6). This interval shows the location of the bulk of the 10,000 simulations. The same area is shown in the next figures. Again, this is a reminder that bias is defined in the long run: a single experiment (one of our simulation iterations) can be far off the population value, especially with small sample sizes.

Here are the results for the median:


As in the previous figure, the shaded area shows the upper part of the mean’s bias 50% HDI when group 1 and group 2 have the same skewness. Bias increases with skewness and with the difference in sample size, and is particularly large for n = 10. However, if the two groups have the same skewness (skewness 6), there is almost no bias even when group 2 has 200 observations and group 1 only 10. So it seems Miller’s (1988) warning about differences in sample sizes was inappropriate, because the main factor causing bias is skewness.

Next, let’s find out if we can correct the bias. Bias correction is performed in 2 ways:

  • using the bootstrap

  • using subsamples, following Miller’s suggestion.

Miller (1988) suggested:

“Although it is computationally quite tedious, there is a way to use medians to reduce the effects of outliers without introducing a bias dependent on sample size. One uses the regular median from Condition F and compares it with a special “average median” (Am) from Condition M. To compute Am, one would take from Condition M all the possible subsamples of Size f where f is the number of trials in Condition F. For each subsample one computes the subsample median. Then, Am is the average, across all possible subsamples, of the subsample medians. This procedure does not introduce bias, because all medians are computed on the basis of the same sample (subsample) size.”

Using all possible subsamples would take far too long. For instance, if one group has 5 observations and the other group has 20 observations, there are 15,504 (choose(20,5)) subsamples to consider. Slightly larger sample sizes would force us to consider millions of subsamples. So instead we compute K random subsamples. I arbitrarily set K to 1,000. Although this is not what Miller (1988) suggested, the K loop shortcut should reduce the bias to some extent if the bias is indeed due to differences in sample size.
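Here is a minimal sketch of the K loop shortcut in R (the function name and inputs are mine, for illustration; the full simulation code is on github):

# "Average median" (Am) from K random subsamples, instead of all possible subsamples.
# g2 = the larger group, n1 = the size of the smaller group (illustrative names).
am_median <- function(g2, n1, K = 1000) {
  sub.md <- numeric(K)
  for (k in seq_len(K)) {
    sub.md[k] <- median(sample(g2, n1, replace = FALSE))  # median of one random subsample
  }
  mean(sub.md)  # average of the K subsample medians
}

# example: compare a small group's median to the average median of
# size-matched random subsamples from a larger group
# am_median(g2 = group2_data, n1 = length(group1_data))

Here are the results: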


The K loop approach has only a small effect on bias. The reason is simple: the main cause of the bias is not a difference in sample size but a difference in skewness. This skewness difference can be handled by the bootstrap. Here is what we get using 200 bootstrap resamples for each simulation iteration:


The bootstrap bias correction works very well. The median’s bias is still a bit larger than the mean’s bias for n = 10 in some conditions. For instance, for the most skewed distribution and n = 10, the mean’s bias is -0.51, whereas the median’s bias after correction is 0.73. Neither is exactly zero and the absolute values are comparable. For n = 10, the median’s maximum bias across distributions is 1.85 ms, whereas the mean’s is 0.7 ms.


Miller’s (1988) advice was inappropriate because, when comparing two groups, bias originates from a conjunction of differences in skewness and in sample size, not from differences in sample size alone. That said, in practice skewed distributions from different conditions or groups may well tend to differ in skewness, in which case unequal sample sizes will exacerbate the bias. To be cautious, when sample size is relatively small, it would be useful to report median effects with and without bootstrap bias correction.

Finally, data & code are available on github.


Reaction times and other skewed distributions: problems with the mean and the median (part 1/4)

In this series of 4 posts, I replicate, expand and discuss the results from

Miller, J. (1988) A warning about median reaction time. J Exp Psychol Hum Percept Perform, 14, 539-543.

Part 1 = replicate Miller’s simulations + apply bootstrap bias correction

Part 2 = expand Miller’s simulations to group comparison

Part 3 = problems with the mean

Part 4 = application to a large dataset

Data & code are available on github.

Reaction times (RT) and many other quantities in neuroscience & psychology are skewed. This asymmetry tends to differ among experimental conditions, such that a measure of central tendency and a measure of spread are insufficient to capture how conditions differ. Instead, to understand the potentially rich differences among distributions, it is advised to consider multiple quantiles of the distributions (Doksum 1974; Pratte et al. 2010; Rousselet et al. 2017), or to model the shapes of the distributions (Heathcote et al. 1991; Rouder et al. 2005; Palmer et al. 2011; Matzke et al. 2013). Yet, it is still common practice to summarise reaction time distributions using a single number, most often the mean: that one value for each participant and each condition can then be entered into a group ANOVA to make statistical inferences. Because of the skewness of reaction times, the mean is however a poor measure of central tendency: skewness shifts the mean away from the bulk of the distribution, an effect that can be amplified by the presence of outliers or a thick right tail. For instance, in the figure below, the median better represents the typical observation than the mean because it is closer to the bulky part of the distribution.


Mean and median for a very right skewed distribution. The distribution is bounded to the left and has a long right tail. This is an ex-Gaussian distribution, which is popular to model the shape of reaction time distributions, even though the matching between its parameters and mental processes can be debated (Sternberg, 2014).

So the median appears to be a better choice than the mean if the goal is to have a single value to tell us about the location of most observations in a skewed distribution. The choice between the mean and the median is however more complicated. It could be argued that because the mean is sensitive to skewness, outliers and the thickness of the right tail, it is better able to capture changes in the shapes of the distributions among conditions. But the use of a single value to capture shape differences will lead to intractable analyses because the same mean could correspond to various shapes. Instead, a multiple quantile approach or explicit shape modelling should be used.

The mean and the median differ in another important aspect: for small sample sizes, the sample mean is unbiased, whereas the sample median is biased (see illustrations in previous post). Concretely, if we perform the same RT experiment many times, and for each experiment we compute the mean and the median, the average sample mean will be very close to the population mean. As the number of experiments increases, the average sample mean will converge to the exact population mean. This is not the case for the median when sample size is small.

The reason for this bias is explained by Miller (1988):

“Like all sample statistics, sample medians vary randomly (from sample to sample) around the true population median, with more random variation when the sample size is smaller. Because medians are determined by ranks rather than magnitudes of scores, the population percentiles of sample medians vary symmetrically around the desired value of 50%. For example, a sample median is just as likely to be the score at the 40th percentile in the population as the score at the 60th percentile. If the original distribution is positively skewed, this symmetry implies that the distribution of sample medians will also be positively skewed. Specifically, unusually large sample medians (e.g., 60th percentile) will be farther above the population median than unusually small sample medians (e.g., 40th percentile) will be below it. The average of all possible sample medians, then, will be larger than the true median, because sample medians less than the true value will not be small enough to balance out the sample medians greater than the true value. Naturally, the more the distribution is skewed, the greater will be the bias in the sample median.”

To illustrate the sample median’s bias, Miller (1988) employed 12 ex-Gaussian distributions that differ in skewness. The distributions are illustrated in the next figure, and colour-coded using the difference between the mean and the median as a non-parametric measure of skewness.


Here are the parameters of the 12 ex-Gaussian distributions, along with the associated population values.



The first figure in this post used the most skewed distribution of the 12, with parameters (300, 20, 300).

To estimate bias, we run a simulation in which we sample with replacement 10,000 times from each of the 12 distributions. We take random samples of sizes 4, 6, 8, 10, 15, 20, 25, 35, 50 and 100, as did Miller. For each random sample, we compute the mean and the median. For each sample size and ex-Gaussian parameter, the bias is then defined as the difference between the mean of the 10,000 sample estimates and the population value.
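In R, the core of the simulation can be sketched like this. Only one of the 12 parameter sets is shown (the most skewed one, (300, 20, 300), mentioned earlier), and the population median is approximated numerically from a very large sample; the full code is on github.

# Bias of the sample mean and sample median for one ex-Gaussian distribution.
# The full code loops over all 12 parameter sets.
rexgauss <- function(n, mu, sigma, tau) rnorm(n, mu, sigma) + rexp(n, rate = 1/tau)

set.seed(44)
nsim <- 10000
nvec <- c(4, 6, 8, 10, 15, 20, 25, 35, 50, 100)
pop.mean <- 300 + 300                               # ex-Gaussian mean = mu + tau
pop.median <- median(rexgauss(1e6, 300, 20, 300))   # numerical approximation

bias <- t(sapply(nvec, function(n) {
  est <- replicate(nsim, { x <- rexgauss(n, 300, 20, 300); c(mean(x), median(x)) })
  c(n = n,
    mean.bias = mean(est[1, ]) - pop.mean,
    median.bias = mean(est[2, ]) - pop.median)
}))
round(bias, 2)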

First, we check that the mean is not biased:


Each line shows the results for one type of ex-Gaussian distribution: the mean of 10,000 simulations for different sample sizes. The grey area marks the 50% highest-density interval (HDI) of the 10,000 simulations for the least skewed distribution (the same interval is shown in the next two figures for comparison). The interval shows the variability across simulations and highlights an important aspect of the results: bias is a long-run property of an estimator; there is no guarantee that one value from a single experiment will be close to the population value. Also, the variability among samples increases with decreasing sample size, which is why results across small n experiments can differ substantially.

Here are the median bias estimates in table format:



Columns = sample sizes; rows = skewness


The values are very close to those reported in Miller (1988):


An illustration is easier to grasp:


As reported by Miller (1988), bias can be quite large and it gets worse with decreasing sample sizes and increasing skewness.

Based on these results, Miller made this recommendation:

“An important practical consequence of the bias in median reaction time is that sample medians must not be used to compare reaction times across experimental conditions when there are unequal numbers of trials in the conditions.”

According to Google Scholar, Miller (1988) has been cited 172 times. A look at some of the oldest and most recent citations reveals that his advice has been followed.

For instance, Lavie (1995) noted:

“In the following experiments I used the mean RTs for each participant rather than the medians, as the increase in number of go/no-go errors under the high-load conditions resulted in a different number of responses in between conditions (see Miller, 1988).”

Tipper et al. (1992):

“Analysis by Miller (1988) shows that for large numbers of trials, differences in the numbers between conditions […] has no impact on the medians obtained.”

More recently, Robinson et al. (2018):

“[…] comparing medians among conditions with an unequal number of trials may lead to false positives (Miller, 1988)”

Du et al. (2017):

“We chose the mean rather than median of RT as […] the sample median may provide a biased estimation of RT (Miller, 1988)”

The list goes on. Also, in a review paper, Whelan (2008), cited 324 times, reiterates the advice:

“The median should never be used on RT data to compare conditions with different numbers of trials.”

However, there are several problems with Miller’s advice. In particular, using the mean leads to many issues with estimation and statistical inferences, as we will see in the 3rd post. In this post, we tackle one key omission from Miller’s assessment: the bias of the sample median can be corrected, using a percentile bootstrap bias correction, as described in this previous post.

For each iteration in the simulation, bias correction was performed using 200 bootstrap samples. Here are the bias corrected results:


The bias correction works very well on average, except for the smallest sample sizes. The failure of the bias correction for very small n is not surprising, because the shape of the sampling distribution cannot be properly estimated by the bootstrap from so few observations. From n = 10, the bias values are very close to those observed for the mean. So it seems that in the long run, we can eliminate the bias of the sample median by using a simple bootstrap procedure. As we will see in the next post, the bootstrap bias correction is also effective when comparing two groups.

Update: 06 Feb 2018

Finally, let’s have a look at how well bias correction works as a function of sample size and skewness. The smaller the sample size, the less the bootstrap can estimate the sampling distribution and its bias. Ideally, after bias correction, we would expect the sample medians to be centred around zero, with limited variability. This ideal situation could look something like this:


Here we’re considering the most skewed distribution with the smallest sample size. The x-axis shows the median bias before bias correction, whereas the y-axis shows the median bias after bias correction. In this ideal case, the correction works extremely well irrespective of the original bias (average bias is the red vertical line). As a result, the average bias after bias correction is very near zero (horizontal green line).

The reality is very different. The bias correction is only partially effective and is inhomogeneous.


Let’s add some markup to help understand what’s going on.


If the original bias is negative, after correction the median tends to be even more negative, so it is over-corrected in the wrong direction (lower left triangle).

If the original bias is positive, after correction the median is either:
– over-corrected in the right direction (lower right triangle)
– under-corrected in the right direction (middle right triangle)
– over-corrected in the wrong direction (upper right triangle)

This pattern remains, although attenuated, if we consider the largest sample size.


Or if we consider the least skewed distribution.


We can look at the different patterns as a function of sample size and skewness.

The figure below shows the probability of over-correcting in the wrong direction given that the original bias is negative (lower left triangle of the marked-up figure).


In the ideal situation illustrated previously, the expected proportion of over-correction in the wrong direction, given an original negative bias, is 6.7%. So here we clearly have an overrepresentation of these cases. When the original bias is negative, in most cases the bootstrap is unable to correct in the right direction. The situation gets worse with increasing skewness and smaller sample sizes.

The figure below shows the probability of under-correcting in the right direction given that the original bias is positive (middle right triangle of the marked-up figure).


In the ideal situation illustrated previously, the expected proportion of under-correction in the right direction, given an original positive bias, is 44.7%. So here we have an overrepresentation of these cases. When the original bias is positive, in too many cases the bootstrap corrects in the right direction but by too little. The situation gets worse with increasing skewness and smaller sample sizes.

OK, so bias correction is imperfect and it varies a lot, in part depending on whether the sample median fell above or below the unknown population median. Think using the mean is safer? There are several strong arguments to the contrary – more in another post.


Doksum, K. (1974) Empirical Probability Plots and Statistical Inference for Nonlinear Models in the two-Sample Case. Annals of Statistics, 2, 267-277.

Du, Y., Valentini, N.C., Kim, M.J., Whitall, J. & Clark, J.E. (2017) Children and Adults Both Learn Motor Sequences Quickly, But Do So Differently. Frontiers in Psychology, 8.

Heathcote, A., Popiel, S.J. & Mewhort, D.J.K. (1991) Analysis of Response-Time Distributions – an Example Using the Stroop Task. Psychol Bull, 109, 340-347.

Lavie, N. (1995) Perceptual Load as a Necessary Condition for Selective Attention. J Exp Psychol Human, 21, 451-468.

Matzke, D., Love, J., Wiecki, T.V., Brown, S.D., Logan, G.D. & Wagenmakers, E.J. (2013) Release the BEESTS: Bayesian Estimation of Ex-Gaussian STop Signal reaction time distributions. Front Psychol, 4.

Miller, J. (1988) A warning about median reaction time. J Exp Psychol Hum Percept Perform, 14, 539-543.

Palmer, E.M., Horowitz, T.S., Torralba, A. & Wolfe, J.M. (2011) What Are the Shapes of Response Time Distributions in Visual Search? J Exp Psychol Human, 37, 58-71.

Pratte, M.S., Rouder, J.N., Morey, R.D. & Feng, C.N. (2010) Exploring the differences in distributional properties between Stroop and Simon effects using delta plots. Atten Percept Psycho, 72, 2013-2025.

Robinson, M. M., Clevenger, J., & Irwin, D. E. (2018). The action is in the task set, not in the action. Cognitive psychology, 100, 17-42.

Rouder, J.N., Lu, J., Speckman, P., Sun, D.H. & Jiang, Y. (2005) A hierarchical model for estimating response time distributions. Psychon B Rev, 12, 195-223.

Sternberg, S. (2014) Reaction times and the ex-Gaussian distribution: When is it appropriate? Retrieved from http://www.psych.upenn.edu//~saul/

Tipper, S.P., Lortie, C. & Baylis, G.C. (1992) Selective Reaching – Evidence for Action-Centered Attention. J Exp Psychol Human, 18, 891-905.

Whelan, R. (2008) Effective analysis of reaction time data. Psychol Rec, 58, 475-482.


Bias & bootstrap bias correction

The code and a notebook for this post are available on github.

The bootstrap bias correction technique is described in detail in chapter 10 of this classic textbook:

Efron, B., & Tibshirani, R. J. (1994). An introduction to the bootstrap. CRC press.

A mathematical summary + R code are available here.

In a typical experiment, we draw samples from an unknown population and compute a summary quantity, or sample statistic, which we hope will be close to the true population value. For instance, in a reaction time (RT) experiment, we might want to estimate how long it takes for a given participant to detect a visual stimulus embedded in noise. To get a good estimate of our RTs, we ask our participant to perform a certain number of trials, say 100. Then, we might compute the mean RT, to get a value that summarises the 100 trials. This mean RT is an estimate of the true, population, value. In that case the population would be all the unknowable reaction times that could be generated by the participant, in the same task, in various situations – for instance after x hours of sleep, y cups of coffee, z pints of beer the night before – that is, all the covariates that we do not control. (The same logic applies across participants.) So for that one experiment, the mean RT might under-estimate or over-estimate the population mean RT. But if we ask our participant to come to the lab over and over, each time to perform 100 trials, the average of all these sample estimates will converge to the population mean. We say that the sample mean is an unbiased estimator of the population mean. Certain estimators, in certain situations, are however biased: no matter how many experiments we perform, the average of the estimates is systematically off, either above or below the population value.

Let’s say we’re sampling from this very skewed distribution. It is an ex-Gaussian distribution, which looks a bit like a reaction time distribution, but it could be any other skewed quantity – there are plenty to choose from in nature. The mean is 600, the median is 509.


Now imagine we perform experiments to try to estimate these population values. Let’s say we take 1,000 samples of 10 observations. For each experiment (sample), we compute the mean. These sample means are shown as grey vertical lines in the next figure. A lot of them fall very near the population mean (black vertical line), but some of them are way off.


The mean of these estimates is shown with the black dashed vertical line. The difference between the mean of the mean estimates and the population value is called bias. Here bias is small (2.5). Increasing the number of experiments will eventually lead to a bias of zero. In other words, the sample mean is an unbiased estimator of the population mean.

For small sample sizes from skewed distributions, this is not the case for the median. In the example below, the bias is 15.1: the average median across 1,000 experiments over-estimates the population median.


Increasing sample size to 100 reduces the bias to 0.7 and improves the precision of our estimates. On average, we get closer to the population median, and the distribution of sample medians has much lower variance.


So bias and measurement precision depend on sample size. Let’s look at sampling distributions as a function of sample size. First, we consider the mean.

The sample mean is not biased

Using our population, let’s do 1000 experiments, in which we take samples of different sizes: 1000, 500, 100, 50, 20 and 10 observations.

Then, we can illustrate how close all these experiments get to the true (population) value.
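A minimal sketch of that simulation in R, assuming an ex-Gaussian population similar in spirit to the one used in the post (the parameters below are placeholders and will not reproduce the exact numbers reported here):

# Sampling distributions of the mean for several sample sizes.
rexgauss <- function(n, mu, sigma, tau) rnorm(n, mu, sigma) + rexp(n, rate = 1/tau)

set.seed(7)
nexp <- 1000                           # number of simulated experiments
nvec <- c(10, 20, 50, 100, 500, 1000)  # sample sizes
pop.mean <- 300 + 300                  # ex-Gaussian mean = mu + tau

samp.means <- sapply(nvec, function(n) replicate(nexp, mean(rexgauss(n, 300, 20, 300))))
colnames(samp.means) <- paste0("n=", nvec)
colMeans(samp.means)         # average sample mean: close to pop.mean for every n
apply(samp.means, 2, sd)     # variability shrinks as sample size increases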


As expected, our estimates of the population mean are less and less variable with increasing sample size, and they converge towards the true population value. For small samples, the typical sample mean tends to underestimate the true population value (yellow curve). But despite the skewness of the sampling distribution with small n, the average of the 1000 simulations/experiments is very close to the population value, for all sample sizes:

  • population = 677.8

  • average sample mean for n=10 = 683.8

  • average sample mean for n=20 = 678.3

  • average sample mean for n=50 = 678.1

  • average sample mean for n=100 = 678.7

  • average sample mean for n=500 = 678.0

  • average sample mean for n=1000 = 678.0

The approximation will get closer to the true value with more experiments. With 10,000 experiments of n=10, we get 677.3. The result is very close to the true value. That’s why we say that the sample mean is an unbiased estimator of the population mean.

It remains that with small n, the typical sample mean tends to underestimate the population mean.

The median of the sampling distribution of the mean in the previous figure is 656.9, which is 21 ms under the population value. A good reminder that small sample sizes lead to bad estimation.

Bias of the median

The sample median is biased when n is small and we sample from skewed distributions:

Miller, J. (1988) A warning about median reaction time. J Exp Psychol Hum Percept Perform, 14, 539-543.

However, the bias can be corrected using the bootstrap. Let’s first look at the sampling distribution of the median for different sample sizes.


Doesn’t look too bad, but for small sample sizes, on average the sample median over-estimates the population median:

  • population = 508.7

  • average sample median for n=10 = 522.1

  • average sample median for n=20 = 518.4

  • average sample median for n=50 = 511.5

  • average sample median for n=100 = 508.9

  • average sample median for n=500 = 508.7

  • average sample median for n=1000 = 508.7

Unlike what happened with the mean, the approximation does not get closer to the true value with more experiments. Let’s try 10,000 experiments of n=10:

  • average sample median = 523.9

There is a systematic shift between the average of the sample estimates and the population value: the sample median is a biased estimator of the population median. Fortunately, this bias can be corrected using the bootstrap.

Bias estimation and bias correction

A simple technique to estimate and correct sampling bias is the percentile bootstrap. If we have a sample of n observations:

  1. sample with replacement n observations from our original sample

  2. compute the estimate

  3. perform steps 1 and 2 nboot times

  4. compute the mean of the nboot bootstrap estimates

The difference between the estimate computed using the original sample and the mean of the bootstrap estimates is a bootstrap estimate of bias.

Let’s consider one sample of 10 observations from our skewed distribution. Its median is 535.9.

The population median is 508.7, so our sample median considerably over-estimates the population value.

Next, we sample with replacement from our sample, and compute bootstrap estimates of the median. The distribution obtained is a bootstrap estimate of the sampling distribution of the median. The idea is this: if the bootstrap distribution approximates, on average, the shape of the sampling distribution of the median, then we can use the bootstrap distribution to estimate the bias and correct our sample estimate. However, as we’re going to see, this works on average, in the long run. There is no guarantee for a single experiment.


Using the current data, the mean of the bootstrap estimates is 722.8.

Therefore, our estimate of bias is the difference between the mean of the bootstrap estimates and the sample median = 187.

To correct for bias, we subtract the bootstrap bias estimate from the sample estimate:

sample median – (mean of bootstrap estimates – sample median)

which is the same as:

2 x sample median – mean of bootstrap estimates.
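Put together, here is a minimal sketch of the procedure in R, for a generic sample x (nboot, the seed and the example sample below are arbitrary, not the sample discussed in this post):

# Percentile bootstrap estimate of bias and bias-corrected median for a sample x.
boot_bc_median <- function(x, nboot = 200) {
  boot.md <- replicate(nboot, median(sample(x, length(x), replace = TRUE)))
  est <- median(x)
  bias <- mean(boot.md) - est              # bootstrap estimate of bias
  c(median = est, bias = bias, bc.median = 2 * est - mean(boot.md))  # bc = est - bias
}

# example with one arbitrary skewed sample of n = 10 (illustrative ex-Gaussian draw)
set.seed(1)
x <- rnorm(10, 300, 20) + rexp(10, rate = 1/300)
boot_bc_median(x)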

Here the bias corrected sample median is 348.6. Quite a drop from 535.9. So the sample bias has been reduced dramatically, clearly too much. But bias is a long run property of an estimator, so let’s look at a few more examples. We take 100 samples of n = 10, and compute a bias correction for each of them. The arrows go from the sample median to the bias corrected sample median. The vertical black line shows the population median.


  • population median = 508.7

  • average sample median = 515.1

  • average bias corrected sample median = 498.8

So across these experiments, the bias correction was too strong.

What happens if we perform 1000 experiments, each with n=10, and compute a bias correction for each one? Now the average of the bias corrected median estimates is much closer to the true median.

  • population median = 508.7

  • average sample median = 522.1

  • average bias corrected sample median = 508.6

It works very well!

But that’s not always the case: it depends on the estimator and on the amount of skewness.

I’ll cover median bias correction in more detail in a future post. For now, if you study skewed distributions (most bounded quantities, such as time measurements, are skewed) and use the median, you should consider correcting for bias. But it’s unclear when to act: bias decreases with sample size, but with small sample sizes the bias itself can be poorly estimated, potentially resulting in catastrophic adjustments. More simulations are needed…

Trimmed means

The R code for this post is on github.

Trimmed means are robust estimators of central tendency. To compute a trimmed mean, we remove a predetermined amount of observations on each side of a distribution, and average the remaining observations. If you think you’re not familiar with trimmed means, you already know one famous member of this family: the median. Indeed, the median is an extreme trimmed mean, in which all observations are removed except one or two.

Using trimmed means confers two advantages:

  • trimmed means provide a better estimation of the location of the bulk of the observations than the mean when sampling from asymmetric distributions;
  • the standard error of the trimmed mean is less affected by outliers and asymmetry than the mean, so that tests using trimmed means can have more power than tests using the mean.

Important point: if we use a trimmed mean in an inferential test (see below), we make inferences about the population trimmed mean, not the population mean. The same is true for the median or any other measure of central tendency. So each robust estimator is a tool to answer a specific question, and this is why different estimators can return different answers…

Here is how we compute a 20% trimmed mean.

Let’s consider a sample of 20 observations:

39 92 75 61 45 87 59 51 87 12  8 93 74 16 32 39 87 12 47 50

First we sort them:

8 12 12 16 32 39 39 45 47 50 51 59 61 74 75 87 87 87 92 93

The number of observations to remove is floor(0.2 * 20) = 4. So we trim 4 observations from each end:

(8 12 12 16) 32 39 39 45 47 50 51 59 61 74 75 87 (87 87 92 93)

And we take the mean of the remaining observations, such that our 20% trimmed mean = mean(c(32,39,39,45,47,50,51,59,61,74,75,87)) = 54.92
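The same computation can be spelled out in R and checked against the trim argument of the base mean function (a minimal sketch):

# 20% trimmed mean computed by hand, then checked against base R.
x <- c(39, 92, 75, 61, 45, 87, 59, 51, 87, 12, 8, 93, 74, 16, 32, 39, 87, 12, 47, 50)

tr <- 0.2
xs <- sort(x)
g <- floor(tr * length(x))              # number of observations trimmed from each side
mean(xs[(g + 1):(length(x) - g)])       # 54.91667
mean(x, trim = 0.2)                     # same result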

Let’s illustrate the trimming process with a normal distribution and 20% trimming:


We can see how trimming gets rid of the tails of the distribution, to focus on the bulk of the observations. This behaviour is particularly useful when dealing with skewed distributions, as shown here:


In this skewed distribution (it’s an F distribution), there is more variability on the right side, which appears stretched compared to the left side. Because we trim the same amount on each side, trimming removes a longer chunk of the distribution on the right side than on the left side. As a consequence, the mean of the remaining points is more representative of the location of the bulk of the observations. This can be seen in the following examples.


Panel A shows the kernel density estimate of 100 observations sampled from a standard normal distribution (MCT stands for measure of central tendency). By chance, the distribution is not perfectly symmetric, but the mean, 20% trimmed mean and median give very similar estimates, as expected. In panel B, however, the sample is from a lognormal distribution. Because of the asymmetry of the distribution, the mean is dragged towards the right side of the distribution, away from the bulk of the observations. The 20% trimmed mean is to the left of the mean, and the median further to the left, closer to the location of most observations. Thus, for asymmetric distributions, trimmed means provide more accurate information about central tendency than the mean.

**Q: “By trimming, don’t we lose information?”**

I have heard that question over and over. The answer depends on your goal: statistical methods are only tools to answer specific questions. I have never met anyone with a true interest in the mean: the mean is always used, implicitly or explicitly, as a tool to indicate the location of the bulk of the observations. Thus, if your goal is to estimate central tendency, then no, trimming doesn’t discard information; it actually increases the quality of the information about central tendency.

I have also heard this criticism: “I’m interested in the tails of the distributions and that’s why I use the mean; trimming gets rid of them”. Tails certainly have interesting stories to tell, but the mean is absolutely not the tool to study them, because it mingles all observations into one value, so we have no way to tell why means differ among samples. If you want to study entire distributions, there are fantastic graphical tools available (Rousselet, Pernet & Wilcox 2017).


Base R has trimmed means built in:

mean can be used by changing the trim argument to the desired amount of trimming:

mean(x, trim = 0.2) gives a 20% trimmed mean.

In Matlab, try the tm function available here.

In Python, try the scipy.stats.tmean function. More Python functions are listed here.


There are plenty of R functions using trimmed means on Rand Wilcox’s website.

We can use trimmed means instead of means in t-tests. However, the calculation of the standard error is different from the traditional t-test formula, because after trimming, the remaining observations are no longer independent. The formula for the adjusted standard error was originally proposed by Karen Yuen in 1974, and it involves winsorization. To winsorize a sample, instead of removing the extreme observations, we replace them with the most extreme values remaining after trimming. So in our example, the 20% winsorized sample is:

32 32 32 32 32 39 39 45 47 50 51 59 61 74 75 87 87 87 87 87

Taking the mean of the winsorized sample gives a winsorized mean; taking the variance of the winsorized sample gives a winsorized variance, etc. I’ve never seen anyone use winsorized means; however, the winsorized variance is used to compute the standard error of the trimmed mean (Yuen 1974). There is also a full mathematical explanation in Wilcox (2012).
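As an illustration, here is a rough sketch in R of the winsorized variance and of the standard error of the 20% trimmed mean, following Yuen (1974) as presented in Wilcox (2012); the function name is mine, for illustration only.

# Winsorized sample, winsorized variance, and SE of the 20% trimmed mean (sketch).
winsorize <- function(x, tr = 0.2) {
  xs <- sort(x)
  n <- length(x)
  g <- floor(tr * n)
  pmin(pmax(x, xs[g + 1]), xs[n - g])   # clamp extreme values to the nearest kept values
}

x <- c(39, 92, 75, 61, 45, 87, 59, 51, 87, 12, 8, 93, 74, 16, 32, 39, 87, 12, 47, 50)
xw <- winsorize(x)                       # the 20% winsorized sample shown above
winvar <- var(xw)                        # winsorized variance
se.tm <- sqrt(winvar) / ((1 - 2 * 0.2) * sqrt(length(x)))  # SE of the 20% trimmed mean
se.tm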

You can use all the functions below to make inferences about means too, by setting tr=0. How much trimming to use is an empirical question, depending on the type of distributions you deal with. By default, all functions set tr=0.2, 20% trimming, which has been studied a lot and seems to provide a good compromise. Most functions will return an error with an alternative function suggestion if you set tr=0.5: the standard error calculation is inaccurate for the median and often the only satisfactory solution is to use a percentile bootstrap.

**Q: “With trimmed means, isn’t there a danger of users trying different amounts of trimming and reporting the one that gives them significant results?”**

This is indeed a possibility, but dishonesty is a property of the user, not a property of the tool. In fact, trying different amounts of trimming could be very informative about the nature of the effects. Reporting the different results, along with graphical representations, could help provide a more detailed description of the effects.

The Yuen t-test performs better than the t-test on means in many situations. For even better results, Wilcox recommends to use trimmed means with a percentile-t bootstrap or a percentile bootstrap. With small amounts of trimming, the percentile-t bootstrap performs better; with at least 20% trimming, the percentile bootstrap is preferable. Details about these choices are available for instance in Wilcox (2012) and Wilcox & Rousselet (2017).

Yuen’s approach

1-alpha confidence interval for the trimmed mean: trimci(x,tr=.2,alpha=0.05)

Yuen t-test for 2 independent groups: yuen(x,y,tr=.2)

Yuen t-test for 2 dependent groups: yuend(x,y,tr=.2)

Bootstrap percentile-t method

One group: trimcibt(x,tr=.2,alpha=.05,nboot=599)

Two independent groups: yuenbt(x,y,tr=.2,alpha=.05,nboot=599)

Two dependent groups: ydbt(x,y,tr=.2,alpha=.05,nboot=599)

Percentile bootstrap approach

One group: trimpb(x,tr=.2,alpha=.05,nboot=2000)

Two independent groups: trimpb2(x,y,tr=.2,alpha=.05,nboot=2000)

Two dependent groups: dtrimpb(x,y=NULL,alpha=.05,con=0,est=mean)


There are some Matlab functions here:

tm – trimmed mean

yuen – t-test for 2 independent groups

yuend – t-test for 2 dependent groups

winvar – winsorized variance

winsample – winsorized sample

wincov – winsorized covariance

These functions can be used with several estimators including trimmed means:

pb2dg – percentile bootstrap for 2 dependent groups

pb2ig – percentile bootstrap for 2 independent groups

pbci – percentile bootstrap for 1 group

Several functions for trimming large arrays and computing confidence intervals are available in the LIMO EEG toolbox.


Karen K. Yuen. The two-sample trimmed t for unequal population variances, Biometrika, Volume 61, Issue 1, 1 April 1974, Pages 165–170, https://doi.org/10.1093/biomet/61.1.165

Rousselet, Guillaume; Pernet, Cyril; Wilcox, Rand (2017): Beyond differences in means: robust graphical methods to compare two groups in neuroscience. figshare. https://doi.org/10.6084/m9.figshare.4055970.v7

Rand R. Wilcox, Guillaume A. Rousselet. A guide to robust statistical methods in neuroscience bioRxiv 151811; doi: https://doi.org/10.1101/151811

Wilcox, R.R. (2012) Introduction to robust estimation and hypothesis testing. Academic Press, San Diego, CA.

How to compare dependent correlations

In this post we’re going to compare two robust dependent correlation coefficients using a frequentist approach. The approach boils down to computing a confidence interval for the difference between correlations. There are several solutions to this problem, and we’re going to focus on what is probably the simplest one, using a percentile bootstrap, as described in Wilcox 2016 & implemented in his R functions twoDcorR() and twoDNOV(). These two functions correspond to two cases:

  • Case 1: overlapping correlations
  • Case 2: non-overlapping correlations

Case 1: overlapping correlations

Case 1 corresponds to the common scenario in which we look for correlations, across participants, between one behavioural measurement and activity in several brain areas. For instance, we could look at correlations between percent correct in one task and brain activity in two regions of interest (e.g. parietal and occipital). In this scenario, papers often report a significant brain-behaviour correlation in one brain area and a non-significant correlation in another brain area. Stopping the analyses at that stage leads to a common interaction fallacy: the fact that correlation 1 is statistically significant and correlation 2 is not does not mean that the two correlations differ (Nieuwenhuis et al. 2011). The interaction fallacy is also covered in a post by Jan Vanhove. Thom Baguley also provides R code to compare correlations, as well as a cautionary note about using correlations at all.

To compare the two correlation coefficients, we proceed like this:

  • sample participants with replacement
  • compute the two correlation coefficients based on the bootstrap samples
  • save the difference between correlations
  • execute the previous steps at least 500 times
  • use the distribution of bootstrap differences to derive a confidence interval: a 95% confidence interval is defined as the 2.5th and 97.5th quantiles of the bootstrap distribution.

A Matlab script implementing the procedure is on github. To run the code you will need the Robust Correlation Toolbox. First we generate data and illustrate them.


Figure 1

% Then we bootstrap the data
Nb = 500;
bootcorr1 = zeros(Nb,1);
bootcorr2 = zeros(Nb,1);

for B = 1:Nb

bootsample = randi(Np,1,Np); % Np = number of participants, defined when generating the data
bootcorr1(B) = Spearman(a(bootsample),b(bootsample),0);
bootcorr2(B) = Spearman(a(bootsample),c(bootsample),0);

end


% Cyril Pernet pointed out on Twitter that the loop is unnecessary.
% We can compute all bootstrap samples in one go:
% bootsamples = randi(Np,Np,Nb);
% bc1 = Spearman(a(bootsamples),b(bootsamples),0);
% bc2 = Spearman(a(bootsamples),c(bootsamples),0);
% The bootstrap loop does make the bootstrap procedure more intuitive
% for new users, especially if they are also learning R or Matlab!

In the example above we used Spearman’s correlation, which is robust to univariate outliers (Pernet, Wilcox & Rousselet, 2012). To apply the technique to Pearson’s correlation, the boundaries of the confidence interval need to be adjusted, as described in Wilcox (2009). However, Pearson’s correlation is not robust so it should be used cautiously (Rousselet & Pernet 2012). Also, as described in Wilcox (2009), Fisher’s z test to compare correlation coefficients is inappropriate.

Confidence intervals are obtained like this:

alpha = 0.05; % probability coverage - 0.05 for 95% CI

hi = floor((1-alpha/2)*Nb+.5);
lo = floor((alpha/2)*Nb+.5);

% for each correlation
boot1sort = sort(bootcorr1);
boot2sort = sort(bootcorr2);
boot1ci = [boot1sort(lo) boot1sort(hi)]; 
boot2ci = [boot2sort(lo) boot2sort(hi)]; 

% for the difference between correlations
bootdiff = bootcorr1 - bootcorr2;
bootdiffsort = sort(bootdiff);
diffci = [bootdiffsort(lo) bootdiffsort(hi)];

We get:

corr(a,b) = 0.52 [0.34 0.66]
corr(a,c) = 0.79 [0.68 0.86]
difference = -0.27 [-0.44 -0.14]

The bootstrap distribution of the differences between correlation coefficients is illustrated below.


Figure 2

The bootstrap distribution does not overlap with zero, the value expected under the null hypothesis. In that case the p value is exactly zero, which is calculated like this:

pvalue = mean(bootdiffsort < 0);
pvalue = 2*min(pvalue,1-pvalue);

The original difference between coefficients is marked by a thick vertical black line. The 95% percentile bootstrap confidence interval is illustrated by the two thin vertical black lines.

Case 2: non-overlapping correlations

Case 2 corresponds to a before-after scenario. For instance the same participants are tested before and after an intervention, such as a training procedure. On each occasion, we compute a correlation, say between brain activity and behaviour, and we want to know if that correlation changes following the intervention.

This case 2 is addressed using a straightforward modification of case 1. Here are example data:


Figure 3

The bootstrap is done like this:

Nb = 500;
bootcorr1 = zeros(Nb,1);
bootcorr2 = zeros(Nb,1);

for B = 1:Nb

bootsample = randi(Np,1,Np);
bootcorr1(B) = Spearman(a1(bootsample),b1(bootsample),0);
bootcorr2(B) = Spearman(a2(bootsample),b2(bootsample),0);

end


alpha = 0.05; % probability coverage - 0.05 for 95% CI
hi = floor((1-alpha/2)*Nb+.5);
lo = floor((alpha/2)*Nb+.5);

% for each correlation
boot1sort = sort(bootcorr1);
boot2sort = sort(bootcorr2);
boot1ci = [boot1sort(lo) boot1sort(hi)]; 
boot2ci = [boot2sort(lo) boot2sort(hi)]; 

% for the difference between correlations
bootdiff = bootcorr1 - bootcorr2;
bootdiffsort = sort(bootdiff);
diffci = [bootdiffsort(lo) bootdiffsort(hi)];

We get:

corr(a1,b1) = 0.52 [0.34 0.66]
corr(a2,b2) = 0.56 [0.39 0.68]
difference = -0.04 [-0.24 0.17]

The difference is very close to zero and its confidence interval includes zero. So the training procedure is associated with a very weak change in correlation.

Instead of a confidence interval, we could also report a highest density interval, which will be very close to the confidence interval if the bootstrap distribution is symmetric – the Matlab script on github shows how to compute a HDI. We could also simply report the difference and its bootstrap distribution. This provides a good summary of the uncertainty we have about the difference, without committing to a binary description of the results as significant or not.


Figure 4


The strategies described here have been validated for Spearman’s correlation and the Winsorized correlation (Wilcox, 2016). The skipped correlation led to confidence intervals that were too conservative, meaning that in simulations the 95% confidence intervals contained the true value more than 95% of the time. This illustrates an important idea: the behaviour of a confidence interval is always estimated in the long run, using simulations, and it results from the conjunction of an estimator and a technique to form the confidence interval. Finally, a very similar bootstrap approach can be used to compare regression coefficients (Wilcox 2012), for instance to compare the slopes of robust linear regressions in an overlapping case (Bieniek et al. 2013).


Bieniek, M.M., Frei, L.S. & Rousselet, G.A. (2013) Early ERPs to faces: aging, luminance, and individual differences. Frontiers in psychology, 4, 268.

Nieuwenhuis, S., Forstmann, B.U. & Wagenmakers, E.J. (2011) Erroneous analyses of interactions in neuroscience: a problem of significance. Nat Neurosci, 14, 1105-1107.

Pernet, C.R., Wilcox, R. & Rousselet, G.A. (2012) Robust correlation analyses: false positive and power validation using a new open source matlab toolbox. Front Psychol, 3, 606.

Wilcox, R.R. (2009) Comparing Pearson Correlations: Dealing with Heteroscedasticity and Nonnormality. Communications in Statistics-Simulation and Computation, 38, 2220-2234.

Wilcox, R.R. (2012) Introduction to robust estimation and hypothesis testing. Academic Press, San Diego, CA.

Wilcox, R.R. (2016) Comparing dependent robust correlations. Brit J Math Stat Psy, 69, 215-224.

How to illustrate a 2×2 mixed ERP design

Let’s consider a simple mixed ERP design with 2 repeated measures (2 tasks) and 2 independent groups of participants (young and older participants). The Matlab code and the data are available on github. The data are time-courses of mutual information, with one vector time-course per participant and task. These results are preliminary and have not been published yet, but you can get an idea of how we use mutual information in the lab in recent publications (Ince et al. 2016a, 2016b; Rousselet et al. 2014). The code and illustrations presented in the rest of the post are not specific to mutual information.

Our 2 x 2 experimental design could be analysed using the LIMO EEG toolbox for instance, by computing a 2 x 2 ANOVA at every time point, and correcting for multiple comparisons using cluster based bootstrap statistics (Pernet et al. 2011, 2015). LIMO EEG has been used to investigate task effects for instance (Rousselet et al. 2011). But here, instead of ANOVAs, I’d like to focus on graphical representations and non-parametric assessment of our simple group design, to focus on effect sizes and to demonstrate how a few figures can tell a rich data-driven story.

First, we illustrate the 4 cells of our design. Figure 1 shows each group and each task separately: in each cell all participants are superimposed using thin coloured lines. We can immediately see large differences among participants and between groups, with overall smaller effects (mutual information) in older participants. There also seem to be task differences, in particular in young participants, who tend to show more sustained effects past 200 ms in the expressive task than in the gender task.


Figure 1

To complement the individual traces, we can add measures of central tendency. The mean is shown with a thick green line, the median with a thick black line. See how the mean can be biased compared to the median in the presence of extreme values. The median was calculated using the Harrell-Davis estimator of the 50th quantile. To illustrate the group median with a measure of uncertainty, we can add a 95% percentile bootstrap confidence interval for instance (Figure 2).


We can immediately see discrepancies between the median time-courses and their confidence intervals on the one hand, and the individual time-courses on the other hand. There are indeed many distributions of participants that can lead to the same average time-course. That’s why it is essential to show individual results, at least in some illustrations.

In our 2 x 2 design, we now have 3 aspects to consider: group differences, task differences and their interactions. We illustrate them in turn.

Age group differences for each task

We can look at the group differences in each task separately, as shown in Figure 3. The median of each group is shown with its 95% percentile bootstrap confidence interval. On average, older participants tend to have weaker mutual information than young participants – less than half around 100-200 ms post-stimulus. This will need to be better quantified, for instance by reporting the median of all pairwise differences.


Figure 3

Under each panel showing the median + CI for each group, we plot the time-course of the group differences (young-older), with a confidence interval. For group comparisons we cannot illustrate individual differences, because participants are not paired across groups. However, we can illustrate all the bootstrap samples, shown in grey. Each sample was obtained by:

  • sampling with replacement Ny observations among the Ny young observers
  • sampling with replacement No observations among the No older observers
  • computing the median of each group
  • subtracting the two medians

It is particularly important to illustrate the bootstrap distributions if they are skewed or contain outliers, or both, to check that the confidence intervals provide a good summary. If the bootstrap samples are very skewed, highest density intervals might be a good alternative to classic confidence intervals.

The lower panels of Figure 3 reveal relatively large group differences in a narrow window within 200 ms. The effect also appears to be stronger in the expressive task. Technically, one could also say that the effects are statistically significant, in a frequentist sense, when the 95% confidence intervals do not include zero. But not much is gained from that because some effects are large and others are small. Correction for multiple comparisons would also be required.

Task differences for each group

Figure 4 has a similar layout to Figure 3, now focusing on the task differences. The top panels suggest that the group medians don’t differ much between tasks, except maybe in young participants around 300-500 ms.


Figure 4

Because task effects are paired, we are not limited to the comparison of the medians between tasks; we can also illustrate the individual task differences and the medians of these differences [1]. These are shown in the bottom panels of Figure 4. In both groups, the individual differences are large and the time-courses of the task differences are scattered around zero, except in the young group starting around 300 ms, where most participants have positive differences (expressive > gender).

[1] When the mean is used as a measure of central tendency, these two perspectives are identical, because the difference between two means is the same as the mean of the pairwise differences. However, this is not the case for the median: the difference between medians is not the same as the median of the differences. Because we are interested in effect sizes, it is more informative to report descriptive statistics of the pairwise differences. The advantage of the Matlab code provided with this post is that instead of looking at the median, we can also look at other quantiles, thus getting a better picture of the strength of the effects.

Interaction between tasks and groups

Finally, in Figure 5 we consider the interactions between task and group factors. To do that we first superimpose the medians of the task differences with their confidence intervals (top panel). These traces are the same as those shown in the bottom panels of Figure 4. I can’t say I’m very happy with the top panel of Figure 5, because the two traces are difficult to compare. Essentially they don’t seem to differ much, except maybe for the late effect in young participants being stronger than that observed in older participants.


In the lower panel of Figure 5 we illustrate the age group differences (young – older) between the medians of the pairwise task differences. Again confidence intervals are also provided, along with the original bootstrap samples. Overall, there is very little evidence for a 2 x 2 interaction, suggesting that the age group differences are fairly stable across tasks. Put another way, the weak task effects don’t appear to change much in the two age groups.


Ince, R.A., Jaworska, K., Gross, J., Panzeri, S., van Rijsbergen, N.J., Rousselet, G.A. & Schyns, P.G. (2016a) The Deceptively Simple N170 Reflects Network Information Processing Mechanisms Involving Visual Feature Coding and Transfer Across Hemispheres. Cereb Cortex.

Ince, R.A., Giordano, B.L., Kayser, C., Rousselet, G.A., Gross, J. & Schyns, P.G. (2016b) A statistical framework for neuroimaging data analysis based on mutual information estimated via a gaussian copula. Hum Brain Mapp.

Pernet, C.R., Chauveau, N., Gaspar, C. & Rousselet, G.A. (2011) LIMO EEG: a toolbox for hierarchical LInear MOdeling of ElectroEncephaloGraphic data. Comput Intell Neurosci, 2011, 831409.

Pernet, C.R., Latinus, M., Nichols, T.E. & Rousselet, G.A. (2015) Cluster-based computational methods for mass univariate analyses of event-related brain potentials/fields: A simulation study. Journal of neuroscience methods, 250, 85-93.

Rousselet, G.A., Gaspar, C.M., Wieczorek, K.P. & Pernet, C.R. (2011) Modeling Single-Trial ERP Reveals Modulation of Bottom-Up Face Visual Processing by Top-Down Task Constraints (in Some Subjects). Front Psychol, 2, 137.

Rousselet, G.A., Ince, R.A., van Rijsbergen, N.J. & Schyns, P.G. (2014) Eye coding mechanisms in early human face event-related potentials. J Vis, 14, 7.

Matlab code for the shift function: a powerful tool to compare two entire marginal distributions

Recently, I presented R code for the shift function, a powerful tool to compare two entire marginal distributions.

The Matlab code is now available on github.

shifthd has the same name as its R version, which was originally programmed by Rand Wilcox and first documented in 1995 (see details). It computes a shift function for independent groups, using a percentile bootstrap estimation of the SE of the quantiles to compute confidence intervals.

shiftdhd is the version for dependent groups.

More recently, Wilcox introduced a new version of the shift function in which a straightforward percentile bootstrap is used to compute the confidence intervals, without estimation of the SE of the quantiles. This is implemented in Matlab as shifthd_pbci for independent groups (equivalent to qcomhd in R); as shiftdhd_pbci for dependent groups (equivalent to Dqcomhd in R).

A demo file shift_function_demo is available here, along with the function shift_fig and dependencies cmu and UnivarScatter.

For instance, if we use the ozone data covered in the previous shift function post, a call to shifthd looks like this:

[xd, yd, delta, deltaCI] = shifthd(control,ozone,200,1);

producing this figure:


The output of shifthd, or any of the other 3 sf functions, can be used as input into shift_fig:

shift_fig(xd, yd, delta, deltaCI,control,ozone,1,5);

producing this figure:


This is obviously work in progress, and shift_fig is meant as a starting point.

Have fun exploring how your distributions differ!

And if you have any question, don’t hesitate to get in touch.