Mean or median reaction time? An open review of Miller (2020)

Below is a review of Miller (2020), which is a reply to an article in which Rand Wilcox & I reproduced and built upon Miller (1988). Here are the articles in order:

[1] A Warning About Median Reaction Time

[2] Reaction times and other skewed distributions: problems with the mean and the median

[3] Another Warning about Median Reaction Time

I’ve added a link to [3] and this review to article [2], so readers are aware of the discussions.

The review will only make sense if you at least read [3] first, but [2] contains a lot of simulations, descriptions and references not covered in [3].

Review

To start, I’ve got to say I’ve learnt so much about various sources of bias from your work on reaction time analyses, including Miller (1988) and many subsequent papers. I discovered Miller (1988) by chance a few years ago, while researching a review article on robust measures of effect sizes. Actually, I was so startled by the 1988 results that I dropped the article on effect sizes to replicate Miller’s simulations and explore their consequences. This extensive work has taught me a lot about reaction time data, the mean, the median, their sampling distributions and associated inferential statistics. So I’m really thankful for that.

Overall, I enjoyed your reply to our paper, which provides interesting new simulations and a good summary of the issues. The main apparent discrepancies are much smaller than they seem and can easily be addressed by wording key statements and conclusions more carefully, for instance by highlighting boundary conditions. If anything I wrote below is unclear, feel free to contact me directly. In particular, if needed, we could discuss the g&h simulation code, which I realise I probably could have explained better in the R&W2020 article.

Your article is well written. The illustrations are fine but could be improved by adding colours or grey/black contrasts. In Figure 4 (presenting the most original and interesting results), the symbols could be different for the 3 estimators to improve clarity.

The simulation results are convincing and mostly concur with our own assessment. However, what is missing is a consideration of the effects of outliers and the skewness of the distribution of differences (after pooling across trials using the mean, the median or some other estimator). As we explained in R&W2020, outliers and skewness can essentially destroy the power of inferential tests using the mean, whereas tests on the median are hardly affected.

“Another unusual feature of R&W’s simulations is that they used g&h distributions (Hoaglin, 1985) as models of RT. These distributions are quite different from ex-Gaussians and are not normally considered in RT modelling (e.g., Luce, 1986). This distributional choice may also have contributed to the advantage for medians in their simulations.”

I don’t understand how our simulations using g&h distributions could be the source of the discrepancy, because we didn’t use them to simulate distributions of reaction times. In fact, we used them to systematically manipulate the distribution of differences that is fed to a one-sample test, from normal to very skewed. We also varied the probability that outliers are observed. The shape of the marginal RT distributions should be irrelevant to these simulations: when the RT distributions are identical in every condition and participant, irrespective of their skewness, the distribution of differences is symmetric and approximately normal (that is, for each participant, compute the mean for each condition then subtract the means, leading to a distribution of pairwise differences). When the distributions differ in skewness, the distribution of differences has skewness equal to the difference in skewness between the original distributions. Thus, our simulations are only concerned with the group-level analyses, and only with the shapes of the distributions of differences; whether these distributions resulted from individual means or medians was not considered. We also used different one-sample tests for the different estimators of central tendency of the distributions of differences. This is necessary because the mean, the median and trimmed means have different standard errors. As a result, our assessments of the median are not directly comparable between the two sets of simulations.

To be clear, as I understand it, in your simulations different RT distributions from 2 conditions are summarised using the mean and the median, one value per condition and per participant (stage 1); pairwise differences are computed, resulting in distributions of differences (stage 2); and one-sample tests on the mean of the differences are performed (stage 3).
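To make the contrast with our approach concrete, here is a minimal sketch of that two-stage pipeline as I understand it (my own illustration in Python, with arbitrary ex-Gaussian parameters and numbers of trials and participants, not the settings used in the article):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

def exgauss(n, mu=300, sigma=20, tau=100):
    """Ex-Gaussian RTs: normal(mu, sigma) plus exponential(tau)."""
    return rng.normal(mu, sigma, n) + rng.exponential(tau, n)

n_participants, n_trials = 30, 50

# Stage 1: summarise each participant and condition with the mean and the median.
# Here both conditions share the same ex-Gaussian, so the null hypothesis is true.
cond1 = np.array([exgauss(n_trials) for _ in range(n_participants)])
cond2 = np.array([exgauss(n_trials) for _ in range(n_participants)])

# Stage 2: pairwise differences across participants.
diff_means = cond1.mean(axis=1) - cond2.mean(axis=1)
diff_medians = np.median(cond1, axis=1) - np.median(cond2, axis=1)

# Stage 3: one-sample t-test on the mean of the differences.
print(stats.ttest_1samp(diff_means, 0))
print(stats.ttest_1samp(diff_medians, 0))

# The point made above: the trial-level RTs are skewed, but because the two
# conditions have the same shape, the participant-level differences are
# roughly symmetric.
print("skewness of the differences of means:", stats.skew(diff_means))
```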

In R&W2020’s g&h simulations, we ignored stage 1. We only considered the shapes of the distributions of differences, and then computed one-sample tests for 3 different estimators of central tendency. The skewness of these distributions depends on both within- and between-participant variability, but we did not model these sources of variability explicitly, only their end-product. Our simulations demonstrate that, given the same distribution of differences, in some situations the median and the 20% trimmed mean have dramatically more power than the mean.
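By contrast, here is a sketch of the logic of our g&h simulations: the distribution of differences is sampled directly from a g-and-h distribution, in which g controls skewness and h controls the thickness of the tails (and therefore how likely outliers are), and each estimator of central tendency gets its own one-sample test. For illustration I use a t-test for the mean and percentile-bootstrap tests for the median and the 20% trimmed mean; the tests reported in R&W2020 differ in their details.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

def g_and_h(n, g=0.0, h=0.0):
    """Sample from a g-and-h distribution (Hoaglin, 1985):
    g controls skewness, h controls tail thickness."""
    z = rng.standard_normal(n)
    x = z if g == 0 else (np.exp(g * z) - 1) / g
    return x * np.exp(h * z**2 / 2)

def boot_pval(x, estimator, nboot=2000):
    """Two-sided percentile-bootstrap p value for H0: estimator(X) = 0."""
    boot = np.array([estimator(rng.choice(x, x.size, replace=True))
                     for _ in range(nboot)])
    p = np.mean(boot > 0) + 0.5 * np.mean(boot == 0)
    return 2 * min(p, 1 - p)

# One simulated "experiment": 30 participant-level differences drawn from a
# skewed g-and-h distribution, shifted by a small effect.
n, effect = 30, 0.3
x = g_and_h(n, g=1.0, h=0.0) + effect

print("t-test on the mean,         p =", stats.ttest_1samp(x, 0).pvalue)
print("bootstrap on the median,    p =", boot_pval(x, np.median))
print("bootstrap on the 20% tmean, p =", boot_pval(x, lambda v: stats.trim_mean(v, 0.2)))
```

Wrapping the last few lines in a loop over many simulated experiments gives the power of each test for that particular shape of the distribution of differences, which is all these simulations are concerned with.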

It might well be that in some (many?) situations, RT distributions do not differ enough in skewness to affect the power of the one-sample t-test. But it remains a fundamental statistical result that one-sample t-tests are strongly affected by skewness, and even much more so by outliers. This is also covered in detail in Wilcox & Rousselet (2018).
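To illustrate with a toy example (mine, with made-up parameters, not a simulation from either paper): adding a small proportion of outliers to otherwise normal differences is enough to wreck the power of the one-sample t-test for a fixed shift.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def power_ttest(sample_fn, effect=0.5, n=30, nsim=2000, alpha=0.05):
    """Proportion of simulated experiments in which the t-test rejects H0."""
    pvals = [stats.ttest_1samp(sample_fn(n) + effect, 0).pvalue
             for _ in range(nsim)]
    return np.mean(np.array(pvals) < alpha)

def normal(n):
    return rng.standard_normal(n)

def contaminated(n):
    # 10% of observations come from a much wider normal: occasional outliers.
    return np.where(rng.random(n) < 0.1,
                    rng.normal(0, 10, n),
                    rng.standard_normal(n))

print("power, normal differences:      ", power_ttest(normal))
print("power, contaminated differences:", power_ttest(contaminated))
```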

To address this discrepancy in simulation results, I don’t think new simulations necessarily need to be added, but the problem should be presented more carefully.

Other apparent disagreements can be addressed by more careful phrasing of the situations under which the problems occur, especially at the start of the conclusion section, which I find somewhat misleading.

“R&W concluded that “there seems to be no rationale for preferring the mean over the median as a measure of central tendency for skewed distributions” (p. 37). On the contrary, when performing hypothesis tests to compare the central tendencies of RTs between experimental conditions, the present simulations show that there may be an extremely clear rationale in terms of both Type I error rate and power.”

As we explain in our paper, it is precisely because the mean is a poor measure of central tendency that in some situations it is better at detecting distribution differences (particularly when conditions differ in skewness, and more specifically in their right tails when dealing with RT distributions). But higher power or a nominal Type I error rate doesn’t make the mean a better measure of central tendency.

What is needed in this section is a clear distinction between 2 different but complementary goals:
[1] to detect differences between distributions;
[2] to understand how distributions differ.

As we argue in our paper, if the goal is [2], then it makes no sense to use only the mean or only the median; much better tools are available, starting with using the mean and the median together to get a richer perspective. The distinction is clearly made in footnote 1 of your article, but should be reiterated in the conclusion and the abstract, so there is no confusion.

“When comparing conditions with unequal numbers of trials, the sample-size-dependent bias of regular medians can lead to clear inflation of the Type I error rate (Fig. 2), so these medians definitely should not be used.”

This statement is only valid when the condition with the fewest trials has a low absolute number of trials. So, to be clear, the problem emerges only for a combination of absolute and relative numbers of trials. The problem also arises only when the group statistics involve means: in contrast, performing for instance a median one-sample test on the group differences between medians does not lead to inflated false positives. I realise most users who collapse RT distributions are likely to perform group statistics using the mean, but this assumption needs to be stated explicitly. The choice of estimator applies at two levels of analysis: within-participants and between-participants.
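As a reminder of where the bias comes from (a quick sketch of the standard demonstration, not code from either article): for a right-skewed RT distribution, the sample median overestimates the population median in small samples, and the bias shrinks as the number of trials grows, whereas the sample mean is unbiased at every sample size.

```python
import numpy as np

rng = np.random.default_rng(4)

def exgauss(size, mu=300, sigma=20, tau=100):
    """Ex-Gaussian RTs: normal(mu, sigma) plus exponential(tau)."""
    return rng.normal(mu, sigma, size) + rng.exponential(tau, size)

# Population values, approximated from a very large sample.
pop = exgauss(2_000_000)
pop_mean, pop_median = pop.mean(), np.median(pop)

nsim = 20_000
for n_trials in (4, 8, 16, 32, 64):
    samples = exgauss((nsim, n_trials))
    bias_mean = samples.mean(axis=1).mean() - pop_mean
    bias_median = np.median(samples, axis=1).mean() - pop_median
    print(f"n = {n_trials:3d}   bias of the mean = {bias_mean:6.2f} ms"
          f"   bias of the median = {bias_median:6.2f} ms")
```

With unequal numbers of trials, the condition with the fewest trials carries the larger bias of the two, which is what creates the spurious between-condition difference when the participant-level medians are then averaged at the group level.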

Because the choice of estimator applies at both levels, it would be worth describing in the text what inferential tests were used for the analyses (I presume t-tests). This should bring some nuance to statements such as this one, given that power depends on choices at 2 levels of analysis:

“The choice of central tendency measure would then be determined primarily by comparing the power of these three measures (i.e., means, medians, bias-corrected medians).”

In R&W2020 we also considered tests on medians, which solve the bias issue, as well as tests on trimmed means; other options are available too, such as the family of M-estimators.

“In view of the fact that means have demonstrably greater power than bias-corrected medians for experiments with unequal trial frequencies (Fig. 3), it is also sensible to compare power levels in experiments with equal trial frequencies.”

This statement is inaccurate because it is too general: as we demonstrate in Figure 15 of R&W2020, at least in some situations the median can have higher power than the mean. Perhaps more importantly, in the presence of skewness and outliers, methods using quantiles (with the median as a special case) or trimmed means can have dramatically more power than mean-based methods.

To go back to the distinction between goals [1] and [2], as we argue in R&W2020, ideally we should focus on richer descriptions of distributions using multiple quantiles to understand how they differ (a point you also make in other articles). Limiting analyses to the mean or the median is really unsatisfactory. Also, standard t-tests and ANOVAs do not account for measurement error, including item variability, which should really be handled using hierarchical models. Whether going the quantile way or the hierarchical modelling way (or both), the number of trials required to make sense of the data would make the bias issues naturally go away. So personally I see using the mean RT as a valid recommendation, but only in very specific situations.
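For instance, here is a minimal sketch of the quantile approach for a single comparison, using the Harrell-Davis estimator of the deciles (confidence intervals, for instance via the bootstrap, and the hierarchical structure across participants and items are omitted):

```python
import numpy as np
from scipy import stats

def harrell_davis(x, q):
    """Harrell-Davis estimate of the q-th quantile of x."""
    x = np.sort(np.asarray(x))
    n = x.size
    a, b = (n + 1) * q, (n + 1) * (1 - q)
    i = np.arange(1, n + 1)
    w = stats.beta.cdf(i / n, a, b) - stats.beta.cdf((i - 1) / n, a, b)
    return np.sum(w * x)

rng = np.random.default_rng(5)
# Two hypothetical RT conditions that differ mostly in their right tails.
cond1 = rng.normal(300, 20, 200) + rng.exponential(80, 200)
cond2 = rng.normal(300, 20, 200) + rng.exponential(120, 200)

for q in np.arange(0.1, 1.0, 0.1):
    d = harrell_davis(cond2, q) - harrell_davis(cond1, q)
    print(f"decile {q:.1f}: difference = {d:6.1f} ms")
```

In this toy example the differences grow towards the slowest deciles, a pattern that a single mean or median cannot convey.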

Your last point in the discussion about choosing a measure of central tendency before seeing the data is an important one. Perhaps more importantly, given the large number of options available to make sense of rich reaction time data, the prime recommendation should probably be that authors make their data publicly available, so that others can try alternative and, in most cases, better techniques.

Other points:

“There is unfortunately no consensus about the psychological meanings of changes in these different parameters (e.g., Rieger & Miller, 2019), but the ex-Gaussian distribution nevertheless remains useful as a way of describing changes in the shapes as well as the means of RT distributions.”

This is an excellent point. It would also be worth citing Matzke & Wagenmakers (2009) in addition to Rieger & Miller (2019).

I would also mention that other distributions, such as the shifted lognormal, provide better interpretability in terms of shift and scale effects; see Lindeløv (2019).

Typos:

“the very between-condition difference” — remove very?
