Monthly Archives: January 2018

Can someone tell if a person is alive or dead based on a photograph?

In this post I review the now retracted paper:

Delorme A, Pierce A, Michel L and Radin D (2016) Prediction of Mortality Based on Facial Characteristics. Front. Hum. Neurosci. 10:173. doi: 10.3389/fnhum.2016.00173

In typical Frontiers’ style, the reason for the retraction is obscure.

In December 2016, I made negative comments about the paper on Twitter. Arnaud Delorme (the first author, whom I’ve known for over 20 years) got in touch, asking for clarifications about my points. I said I would write something eventually, so here it is.

The story is simple: some individuals claim to be able to determine if a person is alive or dead based on a photograph. The authors got hold of 12 such individuals and asked them to perform a dead/alive/don’t know discrimination task. EEG was measured while participants viewed 394 photos of individuals alive or dead (50/50).

Here are some of the methodological problems.


Participants were from California. Some photographs were of US politicians from outside California. Participants did not report recognising any individuals from the photographs, but unconscious familiarity could still influence behaviour and EEG – who knows?

More importantly, if participants make general claims about their abilities, why not use photographs of individuals from another country altogether? Even better, another culture?


The average group performance of the participants was 53.6%. So as a group, they really can’t do the task. (If you want to argue they can, I challenge you to seek treatment from a surgeon with a 53.6% success record.) Yet, a t-test is reported with p=0.005. Let’s not pay too much attention to the inappropriateness of t-tests for percent correct data. The crucial point is that the participants did not make a claim about their performance as a group: each one of them claimed to be able to tease apart the dead from the living based on photographs. So participants should be assessed individually. Here are the individual performances:

(52.3, 56.7, 53.3, 56.0, 56.6, 51.8, 61.3, 55.3, 50.0, 51.6, 49.5, 49.4)

Five participants have results flagged as significant. One in particular has a performance of 61.3% correct. So how does this compare to participants without super abilities? Well, astonishingly, there is no control group! (Yep, and that paper was peer-reviewed.)

Given the extraordinary claims made by the participants, I would have expected a large sample of control participants, to clearly demonstrate that the “readers” perform well beyond normal. I would also have expected the readers to be tested on multiple occasions, to demonstrate the reliability of the effect.

There are two other problems with the behavioural performance:

  • participants’ responses were biased towards the ‘dead’ response, so a sensitivity measure, such as d’ or a non-parametric equivalent, should have been used.

  • performance varied a lot across the 3 sets of images that composed the 394 photographs. This suggests that the results are not image independent, which could in part be due to the 3 sets containing different proportions of dead and alive persons.
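For readers unfamiliar with sensitivity analyses, here is a minimal sketch of how d’ separates discriminability from response bias. The response counts below are made up for illustration – they are not reported in the paper:

```python
from statistics import NormalDist

def dprime(hits, misses, fas, crs):
    """Signal-detection d': discriminability independent of response bias.
    A log-linear correction (add 0.5 per cell) avoids infinite z-scores
    when a hit or false-alarm rate is exactly 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (fas + 0.5) / (fas + crs + 1)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for one "reader": biased towards "dead"
# (230 of 394 responses) yet barely above chance discrimination.
print(round(dprime(hits=120, misses=77, fas=110, crs=87), 2))
```

An unbiased observer at chance gets d’ = 0 regardless of how often they say “dead”, which is exactly why d’ is preferable to percent correct here.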


The ERP analyses were performed at the group level using a 2 x 2 design: alive/dead x correct/incorrect classification. One effect is reported with p<0.05: a larger amplitude for incorrect than correct trials around 100 ms post-stimulus, only for pictures of dead persons. A proper spatio-temporal cluster correction for multiple comparisons was applied. There is no clear interpretation of the effect in the paper, except a quick suggestion that it could be due to low-level image properties or an attention effect. A non-specific attention effect is possible, because sorting ERPs based on behavioural performance can be misleading, as explained here. The effect could also be a false positive – in the absence of replication and comparison to a control group, it’s impossible to tell.

To be frank, I don’t understand why EEG was measured at all. I guess if the self-proclaimed readers could do the task at all, it would be interesting to look at the time-course of the brain activity related to the task. But the literature on face recognition shows very little modulation due to identity, except in priming tasks or using SSVEP protocols – so not likely to show anything with single image presentations. If there was something to exploit, the analysis should be performed at the participant level, perhaps using multivariate logistic regression, with cross-validation, to demonstrate a link between brain activity and image type. Similarly to behaviour, each individual result from the “readers” should be compared to a large set of control results, from participants who cannot perform the behavioural task.
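To illustrate what a per-participant, cross-validated decoding analysis could look like, here is a toy Python sketch. It swaps logistic regression for a simpler nearest-centroid classifier, and uses random numbers standing in for single-trial EEG features – everything here is hypothetical:

```python
import random
random.seed(0)

def cross_val_accuracy(X, y, k=5):
    """k-fold cross-validation of a nearest-centroid classifier.
    Returns mean held-out accuracy: decoding near 0.5 means the
    features carry no information about the binary label."""
    idx = list(range(len(X)))
    random.shuffle(idx)
    folds = [idx[i::k] for i in range(k)]
    accs = []
    for fold in folds:
        held_out = set(fold)
        train = [i for i in idx if i not in held_out]
        # class centroids estimated from the training folds only
        cents = {}
        for c in set(y):
            rows = [X[i] for i in train if y[i] == c]
            cents[c] = [sum(col) / len(rows) for col in zip(*rows)]
        correct = 0
        for i in fold:
            pred = min(cents, key=lambda c: sum((a - b) ** 2
                                                for a, b in zip(X[i], cents[c])))
            correct += pred == y[i]
        accs.append(correct / len(fold))
    return sum(accs) / len(accs)

# Toy "ERP features" with no class information: accuracy hovers near chance.
X = [[random.gauss(0, 1) for _ in range(10)] for _ in range(200)]
y = [i % 2 for i in range(200)]
print(cross_val_accuracy(X, y))
```

The point of the cross-validation is that a reader’s decoding accuracy is estimated on trials the classifier never saw, so above-chance decoding could not be explained by overfitting – and each reader’s accuracy could then be compared to the distribution of accuracies from controls.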


In conclusion, this paper should never have been sent for peer-review. That would have saved everyone involved a lot of time. There is nothing in the paper supporting the authors’ conclusion:

“Our results support claims of individuals who report that some as-yet unknown features of the face predict mortality. The results are also compatible with claims about clairvoyance warrants further investigation.”

If the authors are serious about studying clairvoyance, they should embark on a much more ambitious study. To save time and money, I would suggest dropping EEG from the study, and focusing on creating a large bank of images from various countries and cultures, with repeated measurements of readers and many controls.


What can we learn from 10,000 experiments?

The code and a notebook for this post are available on github.

Before we conduct an experiment, we decide on a number of trials per condition, and a number of participants, and then we hope that whatever we measure comes close to the population values we’re trying to estimate. In this post I’ll use a large dataset to illustrate how close we can get to the truth – at least in a lexical decision task. The data are from the French lexicon project:

Ferrand, L., New, B., Brysbaert, M., Keuleers, E., Bonin, P., Meot, A., Augustinova, M. & Pallier, C. (2010) The French Lexicon Project: lexical decision data for 38,840 French words and 38,840 pseudowords. Behav Res Methods, 42, 488-496.

After discarding participants who clearly did not pay attention, we have 967 participants who performed a word/non-word discrimination task. Each participant has about 1000 trials per condition. I only consider the reaction time (RT) data, but correct/incorrect data are also available. Here are RT distributions for 3 random participants (PXXX refers to their ID number in the dataset):




The distributions are positively skewed, as expected for RT data, and participants tend to be slower in the non-word condition compared to the word condition. Usually, a single number is used to summarise each individual RT distribution. From 1000 values to 1, that’s some serious data compression. (For alternative strategies to compare RT distributions, see for instance this paper). In psychology, the mean is often used, but here the median gives a better indication of the location of the typical observation. Let’s use both. Here is the distribution across participants of median RT for the word and non-word conditions:


And the distribution of median differences:


Interestingly, the distribution of differences between the medians of the two conditions is also skewed: that’s because the two distributions tend to differ in skewness.

We can do the same for the mean:


Mean differences:


With this large dataset, we can play a very useful game. Let’s pretend that the full dataset is our population that we’re trying to estimate. Across all trials and all participants, the population medians are:

Word = 679.5 ms
Non-word = 764 ms
Difference = 78.5

the population means are:

Word = 767.6 ms
Non-word = 853.2 ms
Difference = 85.5

Now, we can perform experiments by sampling with replacement from our large population. I think we can all agree that a typical experiment does not have 1,000 trials per condition, and certainly does not have almost 1,000 participants! So what happens if we perform experiments in which we collect smaller sample sizes? How close can we get to the truth?

10,000 experiments: random samples of participants only

In the first simulation, we use all the trials (1,000) available for each participant but vary how many participants we sample from 10 to 100, in steps of 10. For each sample size, we draw 10,000 random samples of participants from the population, and we compute the mean and the median. For consistency, we compute group means of individual means, and group medians of individual medians. We have 6 sets of results to consider: for the word condition, the non-word condition, their difference; for the mean and for the median. Here are the results for the group medians in the word condition. The vertical red line marks the population median for the word condition.
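The resampling scheme can be sketched in a few lines. The post’s analyses were done in R (see the notebook on github); this Python version uses a made-up stand-in population instead of the French Lexicon Project data:

```python
import random
from statistics import stdev
random.seed(42)

# Hypothetical stand-in population: one median word RT per participant
# (the real values come from the French Lexicon Project, 967 participants).
pop = [random.gauss(650, 80) + random.expovariate(1 / 60) for _ in range(967)]

def one_experiment(n):
    """Sample n participants with replacement; return the group median
    of their individual medians."""
    s = sorted(random.choices(pop, k=n))
    return s[n // 2]

# 10,000 simulated experiments per sample size: precision improves with n.
spreads = {n: stdev(one_experiment(n) for _ in range(10000))
           for n in (10, 50, 100)}
print({n: round(v, 1) for n, v in spreads.items()})
```

The standard deviation of the 10,000 estimates shrinks as the number of participants per experiment grows, which is the pattern shown in the figures.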


With increasing sample size, we gain in precision: on average the result of each experiment is closer to the population value. With a small sample size, the result of a single experiment can be quite far from the population. This is not surprising, and also explains why estimates from small n experiments tend to disagree, especially between an original study and subsequent replications.

To get a clearer sense of how close the estimates are to the population value, it is useful to consider highest density intervals (HDI). A HDI is the shortest interval that contains a certain proportion of the observations: it shows the location of the bulk of the observations. Here we consider 50% HDIs:
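Computing an HDI from a vector of simulated estimates is straightforward. A minimal sketch, applied to a toy right-skewed sample:

```python
import random
random.seed(3)

def hdi(samples, prop=0.5):
    """Shortest interval containing `prop` of the observations
    (highest density interval for a unimodal sample)."""
    s = sorted(samples)
    m = int(round(prop * len(s)))          # observations inside the interval
    widths = [s[i + m - 1] - s[i] for i in range(len(s) - m + 1)]
    i = widths.index(min(widths))
    return s[i], s[i + m - 1]

# Right-skewed toy sample: the 50% HDI hugs the dense left side.
x = [random.expovariate(1.0) for _ in range(10001)]
lo, hi = hdi(x)
print(round(lo, 3), round(hi, 3))
```

For a skewed sample the HDI is asymmetric around the mean or median, which is exactly the behaviour described for the sampling distributions below.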


In keeping with the previous figure, the intervals get smaller with increasing sample size. The intervals are also asymmetric: the right side is always closer to the population value than the left side. That’s because the sampling distributions are asymmetric. This means that the typical experiment will tend to under-estimate the population value.

We observe a similar pattern for the non-word condition, with the addition of two peaks in the distribution from n = 40 onwards. I’m not sure what’s causing them, or whether this is a common shape. One could check by analysing the lexicon project data from other countries. Any volunteers? One thing is for sure: the two peaks are caused by the median, because they are absent from the mean distributions (see notebook on github).


Finally, we can consider the distributions of group medians of median differences:


Not surprisingly, it has a similar shape to the previous distributions. It shows a landscape of expected effects. It would be interesting to see where the results of other, smaller, experiments fall in that distribution. The distribution could also be used to plan experiments to achieve a certain level of precision. This is rather unusual given the common obsession with statistical power. But experiments should be predominantly about quantifying effects, so a legitimate concern is to determine how far we’re likely to be from the truth in a given experiment. Using our simulated distribution of experiments, we can determine the probability that the absolute difference between an experiment and the population value is larger than some cut-offs. In the figure below, we look at cut-offs from 5 ms to 50 ms, in steps of 5.


The probability of being wrong by at least 5 ms is more than 50% for all sample sizes. But 5 ms is probably not a difference folks studying lexical decision would worry about. It might be an important difference in other fields. 10 ms might be more relevant: I certainly remember a conference at which group differences of 10 ms between attention conditions were seriously discussed, and their implications for theories of attention considered. If we care about a resolution of at least 10 ms, n=30 still gives us a 44% chance of being wrong…
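These cut-off probabilities are easy to compute from any simulated sampling distribution. A sketch, using a symmetric toy distribution with made-up numbers (the real sampling distributions above are asymmetric):

```python
import random
random.seed(7)

def prob_off_by(estimates, truth, cutoffs):
    """P(|estimate - truth| >= cutoff), for each cutoff, across
    simulated experiments."""
    errs = [abs(e - truth) for e in estimates]
    return {c: sum(e >= c for e in errs) / len(errs) for c in cutoffs}

# Toy sampling distribution: 10,000 simulated estimates of a true
# 80 ms effect, with 15 ms of (symmetric) sampling noise.
truth = 80
estimates = [random.gauss(truth, 15) for _ in range(10000)]
probs = prob_off_by(estimates, truth, cutoffs=range(5, 55, 5))
print({c: round(p, 2) for c, p in probs.items()})
```

The probabilities fall as the cut-off grows; running the same function on the real simulated distributions would reproduce the figure above.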

(If someone is looking for a cool exercise: you could determine the probability that two random experiments differ by at least certain amounts.)

We can also look at the 50% HDI of the median differences:


As previously noted, the intervals shrink with increasing sample size, but are asymmetric, suggesting that the typical experiment will under-estimate the population difference. This asymmetry disappears for larger samples of group means:


Anyway, these last figures really make me think that we shouldn’t make such a big deal out of any single experiment, given the uncertainty in our measurements.

10,000 experiments: random samples of trials and participants

In the previous simulation, we sampled participants with replacement, using all their 1,000 trials. Typical experiments use fewer trials. Let’s say I plan to collect data from 20 participants, with 100 trials per condition. What does the sampling distribution look like? Using our large dataset, we can sample both trials and participants to find out. Again, for consistency, group means are computed from individual means, and group medians are computed from individual medians. Here are the sampling distributions of the mean and the median in the word condition:


The vertical lines mark the population values. The means are larger than the medians because of the positive skewness of RT distributions.
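The two-level sampling scheme (participants first, then trials within each participant) can be sketched as follows, with a made-up hierarchical population standing in for the real dataset:

```python
import random
random.seed(11)

# Made-up hierarchical population: 200 "participants", 1,000 trials each,
# skewed RTs with a participant-specific shift (scaled down from 967 x 1000).
population = []
for _ in range(200):
    shift = random.gauss(0, 50)
    population.append([650 + shift + random.expovariate(1 / 80) +
                       random.gauss(0, 40) for _ in range(1000)])

def experiment(n_participants=20, n_trials=100):
    """Sample participants with replacement, then trials within each
    participant; return the group median of individual medians."""
    meds = []
    for p in random.choices(population, k=n_participants):
        trials = sorted(random.choices(p, k=n_trials))
        meds.append(trials[n_trials // 2])
    meds.sort()
    return meds[n_participants // 2]

print(round(experiment()))
```

Calling `experiment()` many times builds up the sampling distribution for a 20-participant, 100-trial design; both sources of sampling variability (participants and trials) are baked in.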

In the non-word condition:


And the difference distributions:


Both distributions are slightly positively skewed, and their medians fall to the left of the population values. The spread of expected values is quite large.

Similarly to simulation 1, we can determine the probability of being wrong by at least 5 ms, which is 39%. For 10 ms, the probability is 28%, and for 20 ms 14%. But that’s assuming 20 participants and 100 trials per condition. A larger simulation could investigate many combinations of sample sizes…

Finally, we consider the 50% HDIs, which are both asymmetric with respect to the population values and have similar lengths:



To answer the question from the title, I think the main things we can learn from the simulated sampling distributions above are:
– to embrace uncertainty
– to be modest in our interpretations
– to maintain healthy skepticism about published research
One way to highlight uncertainty in published research would be to run simulations using descriptive statistics from the articles, to illustrate the range of potential outcomes that are compatible with the results. That would make for interesting stories I’m sure.

Bias & bootstrap bias correction

The code and a notebook for this post are available on github.

The bootstrap bias correction technique is described in detail in chapter 10 of this classic textbook:

Efron, B., & Tibshirani, R. J. (1994). An introduction to the bootstrap. CRC press.

A mathematical summary + R code are available here.

In a typical experiment, we draw samples from an unknown population and compute a summary quantity, or sample statistic, which we hope will be close to the true population value. For instance, in a reaction time (RT) experiment, we might want to estimate how long it takes for a given participant to detect a visual stimulus embedded in noise. To get a good estimate, we ask our participant to perform a certain number of trials, say 100, and then compute the mean RT to summarise them. This mean RT is an estimate of the true, population, value. Here the population would be all the unknowable reaction times that could be generated by the participant, in the same task, in various situations – for instance after x hours of sleep, y cups of coffee, z pints of beer the night before – all the co-variates that we typically do not control. (The same logic applies across participants.)

So for that one experiment, the mean RT might under-estimate or over-estimate the population mean RT. But if we ask our participant to come to the lab over and over, each time to perform 100 trials, the average of all these sample estimates will converge to the population mean: we say that the sample mean is an unbiased estimator of the population mean. Certain estimators, however, are biased in certain situations: no matter how many experiments we perform, the average of the estimates is systematically off, either above or below the population value.

Let’s say we’re sampling from this very skewed distribution. It is an ex-Gaussian distribution which looks a bit like a reaction time distribution, but could be another skewed quantity – there are plenty to choose from in nature. The mean is 600, the median is 509.
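To make this concrete, here is a sketch of sampling from an ex-Gaussian in Python. The parameters are illustrative (normal with mean 450 and sd 50, plus an exponential with mean 150, giving a population mean of 600) – not the exact ones behind the figure:

```python
import random
random.seed(5)

# Hypothetical ex-Gaussian: normal(450, 50) plus exponential with mean 150.
# The population mean is 450 + 150 = 600; the median sits well below it,
# pulled away from the mean by the exponential tail.
x = sorted(random.gauss(450, 50) + random.expovariate(1 / 150)
           for _ in range(100000))
mean_x = sum(x) / len(x)
median_x = x[len(x) // 2]
print(round(mean_x), round(median_x))
```

With a large enough sample, the empirical mean lands near 600 while the median is substantially lower – the signature of positive skewness.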

Now imagine we perform experiments to try to estimate these population values. Let’s say we take 1,000 samples of 10 observations each. For each experiment (sample), we compute the mean. These sample means are shown as grey vertical lines in the next figure. A lot of them fall very near the population mean (black vertical line), but some of them are way off.


The mean of these estimates is shown with the black dashed vertical line. The difference between the mean of the mean estimates and the population value is called bias. Here bias is small (2.5). Increasing the number of experiments will eventually lead to a bias of zero. In other words, the sample mean is an unbiased estimator of the population mean.

For small sample sizes from skewed distributions, this is not the case for the median. In the example below, the bias is 15.1: the average median across 1,000 experiments over-estimates the population median.


Increasing sample size to 100 reduces the bias to 0.7 and improves the precision of our estimates. On average, we get closer to the population median, and the distribution of sample medians has much lower variance.


So bias and measurement precision depend on sample size. Let’s look at sampling distributions as a function of sample size. First, we consider the mean.

The sample mean is not biased

Using our population, let’s run 1,000 experiments for each of several sample sizes: 10, 20, 50, 100, 500 and 1,000 observations.

Then, we can illustrate how close all these experiments get to the true (population) value.


As expected, our estimates of the population mean are less and less variable with increasing sample size, and they converge towards the true population value. For small samples, the typical sample mean tends to underestimate the true population value (yellow curve). But despite the skewness of the sampling distribution with small n, the average of the 1000 simulations/experiments is very close to the population value, for all sample sizes:

  • population = 677.8

  • average sample mean for n=10 = 683.8

  • average sample mean for n=20 = 678.3

  • average sample mean for n=50 = 678.1

  • average sample mean for n=100 = 678.7

  • average sample mean for n=500 = 678.0

  • average sample mean for n=1000 = 678.0

The approximation will get closer to the true value with more experiments. With 10,000 experiments of n=10, we get 677.3. The result is very close to the true value. That’s why we say that the sample mean is an unbiased estimator of the population mean.

It remains that with small n, the sample mean tends to underestimate the population mean.

The median of the sampling distribution of the mean in the previous figure is 656.9, which is 21 ms below the population value. A good reminder that small sample sizes lead to poor estimation.

Bias of the median

The sample median is biased when n is small and we sample from skewed distributions:

Miller, J. (1988) A warning about median reaction time. J Exp Psychol Hum Percept Perform, 14, 539-543.

However, the bias can be corrected using the bootstrap. Let’s first look at the sampling distribution of the median for different sample sizes.


Doesn’t look too bad, but for small sample sizes, on average the sample median over-estimates the population median:

  • population = 508.7

  • average sample median for n=10 = 522.1

  • average sample median for n=20 = 518.4

  • average sample median for n=50 = 511.5

  • average sample median for n=100 = 508.9

  • average sample median for n=500 = 508.7

  • average sample median for n=1000 = 508.7

Unlike what happened with the mean, the approximation does not get closer to the true value with more experiments. Let’s try 10,000 experiments of n=10:

  • average sample median = 523.9

There is a systematic shift between the average sample estimate and the population value: the sample median is a biased estimator of the population median. Fortunately, this bias can be corrected using the bootstrap.
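The bias is easy to reproduce by simulation. A sketch, using an illustrative ex-Gaussian (normal with mean 450 and sd 50, plus an exponential with mean 150 – not the post’s exact distribution):

```python
import random
from statistics import median
random.seed(9)

def rexg(n):
    """n draws from an illustrative ex-Gaussian: normal(450, 50) plus
    an exponential with mean 150 (hypothetical parameters)."""
    return [random.gauss(450, 50) + random.expovariate(1 / 150)
            for _ in range(n)]

pop_median = median(rexg(200000))

# Average sample median across 10,000 simulated experiments:
# clearly above the population median for n = 10, much closer for n = 100.
avg_small = sum(median(rexg(10)) for _ in range(10000)) / 10000
avg_large = sum(median(rexg(100)) for _ in range(10000)) / 10000
print(round(pop_median), round(avg_small), round(avg_large))
```

With n = 10 the average of the sample medians sits well above the population median, and the gap shrinks markedly at n = 100 – the same pattern as in the post’s figures.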

Bias estimation and bias correction

A simple technique to estimate and correct sampling bias is the percentile bootstrap. If we have a sample of n observations:

  1. sample with replacement n observations from our original sample

  2. compute the estimate

  3. perform steps 1 and 2 nboot times

  4. compute the mean of the nboot bootstrap estimates

The difference between the estimate computed using the original sample and the mean of the bootstrap estimates is a bootstrap estimate of bias.

Let’s consider one sample of 10 observations from our skewed distribution. Its median is 535.9.

The population median is 508.7, so our sample considerably over-estimates the population value.

Next, we sample with replacement from our sample, and compute bootstrap estimates of the median. The distribution obtained is a bootstrap estimate of the sampling distribution of the median. The idea is this: if the bootstrap distribution approximates, on average, the shape of the sampling distribution of the median, then we can use the bootstrap distribution to estimate the bias and correct our sample estimate. However, as we’re going to see, this works on average, in the long run. There is no guarantee for a single experiment.


Using the current data, the mean of the bootstrap estimates is 722.8.

Therefore, our estimate of bias is the difference between the mean of the bootstrap estimates and the sample median = 187.

To correct for bias, we subtract the bootstrap bias estimate from the sample estimate:

sample median – (mean of bootstrap estimates – sample median)

which is the same as:

2 x sample median – mean of bootstrap estimates.
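The whole procedure – bootstrap bias estimate plus correction – fits in a few lines. A Python sketch (the post’s own code, on github, is in R):

```python
import random
from statistics import median
random.seed(13)

def bootstrap_bias_corrected_median(sample, nboot=2000):
    """Percentile-bootstrap bias correction for the median.
    bias estimate  = mean of bootstrap medians - sample median
    corrected      = sample median - bias
                   = 2 * sample median - mean of bootstrap medians"""
    boot_meds = [median(random.choices(sample, k=len(sample)))
                 for _ in range(nboot)]
    est = median(sample)
    return 2 * est - sum(boot_meds) / nboot

# One hypothetical sample of 10 skewed observations
# (normal(450, 50) plus exponential with mean 150, illustrative only).
sample = [random.gauss(450, 50) + random.expovariate(1 / 150)
          for _ in range(10)]
print(round(median(sample)), round(bootstrap_bias_corrected_median(sample)))
```

For a symmetric sample the bootstrap bias estimate is close to zero and the correction barely moves the estimate; for small skewed samples it can move it a lot, as the examples below show.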

Here the bias corrected sample median is 348.6 – quite a drop from 535.9. The estimate has been adjusted dramatically, clearly too much. But bias is a long-run property of an estimator, so let’s look at a few more examples. We take 100 samples of n = 10, and compute a bias correction for each of them. The arrows go from the sample median to the bias corrected sample median. The vertical black line shows the population median.


  • population median =  508.7

  • average sample median =  515.1

  • average bias corrected sample median =  498.8

So across these experiments, the bias correction was too strong.

What happens if we perform 1000 experiments, each with n=10, and compute a bias correction for each one? Now the average of the bias corrected median estimates is much closer to the true median.

  • population median =  508.7

  • average sample median =  522.1

  • average bias corrected sample median =  508.6

It works very well!

But that’s not always the case: it depends on the estimator and on the amount of skewness.

I’ll cover median bias correction in more detail in a future post. For now, if you study skewed distributions (most bounded quantities, such as time measurements, are skewed) and use the median, you should consider correcting for bias. But it’s unclear when to apply the correction: bias decreases with sample size, but with small sample sizes the bias itself can be poorly estimated, potentially resulting in catastrophic adjustments. Clearly, more simulations are needed…