
Priors for Bayesian estimation of visual object processing speed in humans

In Bieniek et al. 2015 we reported EEG estimates of visual object processing onsets from a sample of 120 human adult participants aged 18-81. Among these participants, 74 were tested a second time for test-retest assessment. All the details are described in the paper, which is open access. The main outcomes of that paper were onset estimation in every participant and robust descriptive group statistics with various frequentist confidence intervals. Although I still think we could improve on the onset detection technique (notably using shape matching instead of amplitude boundary crossing), we can leave that issue aside for the moment and assume that we’re dealing with a decent sample of EEG onsets. Here I want to revisit the main group analysis using a Bayesian approach. In doing so, I will cover a very Bayesian problem: how to integrate information across experiments. I will focus on a very simple one-sample case, because that’s the nature of the data, and more importantly because this is my first attempt at Bayesian statistics!

Code and data

The R & JAGS code and the data to reproduce all the analyses and the figures are available in the R script onset_priors.R located in this stand-alone folder.

Analyses were performed in R version 3.2.2, using JAGS version 4.1.0, and R functions and examples from the BEST package version 2 by John Kruschke. The R script used to generate the results and the figures assumes that you have read Kruschke's BEST paper and installed his code and dependencies. Much more can be found in his excellent book.

For the onset results, a more comprehensive dataset + Matlab code is available here:

Rousselet, Guillaume; Bieniek, Magdalena; Bennett, Patrick; Sekuler, Allison (2015): Face-noise ERP onsets from 194 recording sessions. figshare.

The EEG data used to compute the onsets are available here:

Bieniek MM, Bennett PJ, Sekuler AB, Rousselet GA (2015) Data from: A robust and representative lower bound on object processing speed in humans. Dryad Digital Repository.

Fake data

Before we get started on our experimental data, we should explore a toy example for which we know exactly what to expect. Indeed, it can be a very bad idea to apply completely new tools to your own data, because there are potentially too many unknowns, and no easy way to check that the tools are doing what they’re supposed to. So first we create fake data to check that the procedure works. We make a sample of 20 observations with a mean of 100 and standard deviation of 5.
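
In R, generating such a sample takes one line (the seed is arbitrary; the exact call in onset_priors.R may differ):

```r
set.seed(44)                                    # arbitrary seed, for reproducibility
fake <- rnorm(n = 20, mean = 100, sd = 5)       # 20 fake onsets
round(c(mean = mean(fake), sd = sd(fake)), 1)   # check the sample statistics
```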

Run Bayesian analysis using BEST’s defaults

We use BEST's defaults with broad priors, 3 chains, 500 adaptation steps, 1,000 burn-in steps and 10,000 monitoring steps; a sketch of the call is shown below, followed by the results.
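
The original analyses rely on Kruschke's BEST code; as a rough sketch, an approximately equivalent one-sample call with the CRAN BEST package would look like this (the argument names are the package's, not necessarily those used in onset_priors.R):

```r
library(BEST)

# one-sample analysis with BEST's broad default priors;
# 3 chains and 500 adaptation steps are handled internally
mcmc_fake <- BESTmcmc(fake, numSavedSteps = 10000, burnInSteps = 1000)

summary(mcmc_fake)   # point estimates and HDIs for mu, sigma, nu
plotAll(mcmc_fake)   # panels similar to Figure 1
```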


Figure 1. Parameter estimation given BEST's default broad priors. Panels A-D show histograms of credible values from the posterior distribution for the mean, standard deviation, effect size and normality parameters. In panel A, the expected value of 100 is indicated in green under the mode of the marginal mean distribution, flanked by two percentages: the proportion of the posterior distribution below and above that value. HDI stands for highest density interval. Panel E shows examples of representative posterior predictive t distributions. These examples appear in blue and are combinations of the parameter values illustrated in panels A, B and D. In red is a histogram of the data. The distributions in blue are not data: they are posterior predictive distributions given our priors and the data.

The results are as expected given the fake data we have generated: the MCMC (Markov Chain Monte Carlo) Gibbs algorithm seems to do a good job of sampling regions of the posterior distribution that are compatible with the data. But how do we assess the quality of the analysis? This is a huge topic, covered in part in Kruschke's book Doing Bayesian data analysis (2nd edition). He offers a diagnostic plotting function, diagMCMC, to investigate the behaviour of the chains.

Run diagnostic tools

We run diagMCMC on the mu parameter and obtain the figure below. Although Kruschke suggests using this figure to determine whether the chains have converged, convergence is not quite the right term, because strictly speaking Markov chains do not converge. So what does this figure actually tell us?


Figure 2. MCMC diagnostics. A Trace plots of iterations, or steps, of the 3 chains for parameter mu (the mean). The first 1,500 steps were discarded because of the default 500 adaptation steps and 1,000 burn-in steps. The strong overlap among the 3 chains suggests that they are representative of the posterior distribution. B Autocorrelation functions for lags 0 to 35 for the 3 chains. This gives an indication of the number of steps necessary for the chains to provide independent samples from the posterior distribution. Here it looks as if 5 steps between samples would be sufficient to obtain a more representative estimation of the posterior distribution. ESS is the effective sample size: it estimates the length of a chain without autocorrelation that would give us the same information as the current one. Here we used 10,000 steps, so an ESS of ~6,000 suggests redundancy among samples taken from very similar parts of the distribution. For an accurate description of the 95% HDI, an ESS of at least 10,000 is recommended, according to Kruschke; for particular applications it would be wise to validate that number. C Shrink factor over iterations (steps) in the chains. The shrink factor is a measure of between-chain variance relative to within-chain variance. A value close to 1 again suggests that the chains are sampling similar representative regions of the posterior distribution. A shrink factor <1.1 is a recommended cut-off according to Kruschke. D Density plots of the parameter values for the 3 chains, and the 95% HDI for each chain. The 3 density plots and HDIs overlap well, suggesting again that the parameter values have been sampled from similar representative regions of the posterior distribution. MCSE is the Monte Carlo standard error: the lower the better. I do not understand yet how MCSE, ESS and shrink factors are calculated.

The 3 chains appear to be well mixed and to be sampling from a high-probability region. However, there is some autocorrelation, and the ESS is only ~6,000. Autocorrelation can be associated with poor, unrepresentative sampling of the posterior distribution. To see if this is a concern, we could run the MCMC analysis again and compare the 95% HDIs. If they are very similar between analyses, then we have probably sampled sufficiently from similar representative portions of the posterior distribution, and there is nothing to worry about. An alternative is to compare the results from the 3 chains we ran independently: in panel D of the figure above, the 3 marginal distributions of the mean and their associated 95% HDIs are very similar, so the sampling was probably sufficient.
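
For readers who prefer to compute these diagnostics directly, the coda package provides the building blocks. The sketch below assumes the 3 chains are available as an mcmc.list called codaSamples (as in Kruschke's scripts); the object name is an assumption, not something defined in this post:

```r
library(coda)

effectiveSize(codaSamples)    # ESS per parameter
gelman.diag(codaSamples)      # potential scale reduction (shrink) factor
autocorr.diag(codaSamples)    # autocorrelation at increasing lags

# 95% interval per chain, to compare sampling across chains
lapply(codaSamples, HPDinterval, prob = 0.95)
```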

To decrease the autocorrelation, we could also thin the chains, or increase the number of steps. See the very useful discussion on Kruschke's blog suggesting that thinning is a poor strategy against autocorrelation. It seems that introducing burn-in steps is also unnecessary.

Results with 5 thinning steps

Let's try thinning anyway. I'll report on better strategies once I have performed some simulations. In this example, using 5 thinning steps gets rid of the autocorrelation and might be sufficient to provide stable estimation of the mean and its 95% HDI; the call is sketched below, followed by the diagnostics:
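
With the CRAN BEST package interface, thinning is a one-argument change (a sketch, not necessarily the exact call from onset_priors.R):

```r
# keep 10,000 saved samples, recording only every 5th MCMC step
mcmc_thin <- BESTmcmc(fake, numSavedSteps = 10000, thinSteps = 5)
```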


Figure 3. MCMC diagnostics for analyses with 5 thinning steps.

The results show slightly improved HDI consistency across the 3 chains, which were already very consistent. In this case, 5 thinning steps were sufficient to get an ESS around the expected 10,000, although running the same analysis again produced variable results, with ESS ranging from ~8,000 to ~10,000. If our main interest is the mode and the 95% HDI of the mean of the samples from the posterior distribution, then thinning does not make much difference compared to the previous results. The required precision of the estimation depends on the application of course.

Results are robust to outliers

In the lab, we only adopt tools that are robust against outliers. Certainly, t-tests and ANOVAs on means are not robust. What about our new Bayesian tool? To check, we add outliers to our sample of fake data and run the analysis again (a sketch of this check appears below). The results are very convincing:
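
A minimal version of this check, with arbitrary outlier values (the values used for the figure below may well differ), could look like this:

```r
fake_out <- c(fake, 150, 160)   # add two gross outliers (arbitrary values)
mcmc_outliers <- BESTmcmc(fake_out, numSavedSteps = 10000)
plotAll(mcmc_outliers)          # compare the mu posterior to the outlier-free run
```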


Figure 4. Parameter estimation given data with outliers.

Although the estimation of the SD has changed, our main interest here is to estimate the mean of probable underlying distributions compatible with the data. And for the mean, the results with outliers are almost identical to those without outliers. Thus, estimation using a t distribution is robust: the estimate of the mean is barely different from the original, and within the random fluctuations expected from running the same MCMC several times. You can try different combinations of outliers to convince yourselves: personally, I'm sold.

Impact of priors

How are the results affected by our choice of priors? Instead of BEST's default settings, we can for instance run the analysis using a uniform prior on the mean and a gamma prior on the SD. We strongly expect onsets to be contained in the broad interval 0-200 ms, so we use a uniform distribution spanning that interval. This gives results very similar to the original ones, which is expected because by default BEST already uses very broad priors.
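
Changing the prior families requires editing the JAGS model string; conceptually, the one-sample model with a uniform prior on the mean and a gamma prior on the SD looks something like the sketch below. The gamma parameters are illustrative placeholders, not the values used in onset_priors.R:

```r
model_string <- "
model {
  for (i in 1:N) {
    y[i] ~ dt(mu, 1/sigma^2, nu)  # t likelihood, robust to outliers
  }
  mu ~ dunif(0, 200)              # onsets expected within 0-200 ms
  sigma ~ dgamma(1.1, 0.05)       # broad gamma prior on the SD (example values)
  nu <- nuMinusOne + 1
  nuMinusOne ~ dexp(1/29)         # Kruschke's prior on the normality parameter
}
"
```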


Figure 5. Parameter estimation given uniform prior on the mean.

But what happens if we run the analysis with a strong prior on the mean? Let's assume that we expect onsets to have a mean around 80 ms, with little dispersion around that value (prior SD = 3 on the mean). In that case the posterior samples for the mean look very different from those obtained using broad priors (Figures 1 & 5):



Figure 6. Posterior distribution of the mean given a strong 80 ms prior. The mode is used as the measure of central tendency of the distribution: although it can be more unstable than the median, it tends to be more representative. The expected value of 100 is indicated in green under the mode and marked by a green vertical dotted line, flanked by two percentages: the proportion of the posterior distribution below and above that value. Under that, in red, is a [95, 105] ROPE, marked by vertical dotted lines. The 3 percentages indicate the proportion of the MCMC distribution below, inside, and above the ROPE. Finally, the 95% HDI is indicated by a thick horizontal black line at the bottom of the histogram.

That's reassuring, because if we have strong prior knowledge about the underlying distribution from which the data might be sampled, a single new experiment should not be able to override that knowledge. Here, in particular, the 95% HDI does not include 100, the sample mean. So should we conclude that 100 is not a credible value for the underlying distribution, given our prior knowledge and the current experiment? If we were dealing with processing speed estimates from EEG recordings, our decision should be guided by other physiological considerations. For instance, if we think that our measurements originate from one particular cortical area, then after considering conduction times between areas & other factors (Salin & Bullier, 1995; Nowak & Bullier, 1997), we could use for instance a narrow margin of error of 5 ms on either side of 100 ms. We would then define a region of practical equivalence, or ROPE, of [95, 105]. Because the HDI does overlap with the proposed ROPE, we should probably reserve judgement about whether 100 is credible given the data.
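
With the CRAN BEST package, a strong prior on the mean can be passed via the priors argument, and the comparison value and ROPE can be added to the posterior plot. This is a sketch using the package's argument names; I am assuming that prior parameters left unspecified keep their package defaults, and onset_priors.R may implement this differently:

```r
# strong prior: mean expected around 80 ms with little dispersion
mcmc_strong <- BESTmcmc(fake,
                        priors = list(muM = 80, muSD = 3),
                        numSavedSteps = 10000)

# posterior of the mean, with a comparison value of 100 ms and a [95, 105] ROPE
plot(mcmc_strong, which = "mean", compVal = 100, ROPE = c(95, 105))
```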

Bayesian analysis of EEG onsets

Now that we’ve gained some confidence in the tools, let’s do a Bayesian analysis of our real dataset of EEG onsets. The variable ses1 contains onsets from 120 participants. ses2 contains onsets from 74 participants who also provided ses1 onsets. Although this is a paired design, and the results can be used to investigate test-retest reliability (see our EJN 2015 paper), here we treat the two sessions as independent.

First, let’s plot kernel density estimates for the two sessions:


Figure 7. Kernel density estimates for session 1 and session 2 observations.
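
These estimates can be produced with a few lines of base R; the bandwidth and styling choices below are mine, not necessarily those used for Figure 7:

```r
plot(density(ses1), lwd = 2, xlab = "Onset (ms)",
     main = "Face-noise ERP onsets")
lines(density(ses2), lwd = 2, lty = 2)
legend("topright", legend = c("Session 1 (n = 120)", "Session 2 (n = 74)"),
       lwd = 2, lty = c(1, 2))
```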

Bayesian analysis on session 1 using BEST’s defaults

To start, we use BEST’s defaults. Before we consider the results, we call the diagnostic plotting function diagMCMC for mu, our main parameter of interest.


Figure 8. MCMC diagnostics for session 1 onset analysis using BEST’s defaults.

The diagnostic tools suggest that the chains are well mixed and seem to be sampling from a high-probability region. However, the ESS is ~6,000, which calls for some thinning or more steps. So let's try again with 20,000 steps (sketched below):
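
As before, this is a sketch using the CRAN BEST package's argument names; ses1 is the vector of session 1 onsets described above:

```r
# session 1 onsets, broad default priors, 20,000 saved steps
mcmc_ses1 <- BESTmcmc(ses1, numSavedSteps = 20000)
```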


Figure 9. MCMC diagnostics for session 1 onset analysis using 20,000 steps.

ESS is now ~10,000, and all the diagnostic indicators look good. It seems we've got a good sampling of the posterior distribution. So let's display the results, with a 100 ms onset for comparison:


Figure 10. Parameter estimation for session 1 onset data using 20,000 iterations.

Here we get a mode of 92.6 and a 95% HDI of [88.4, 96.7]. If we had hypothesised a mean onset of 100 ms, the results suggest that the credible mean is lower than 100 ms. Let's plot only the posterior samples of the mean:


Figure 11. Session 1 onset posterior distribution of the mean.

A [95, 105] ms ROPE overlaps with the 95% HDI, so we reserve judgement: we cannot rule out 100 ms as a credible value.

Bayesian analysis on session 1 using informed priors

What happens if we use more informed priors? For instance, the literature suggests that face onsets are around 50-150 ms. Here are the results using a uniform distribution on that interval:


Figure 12. Parameter estimation for session 1 data using a uniform prior.

And the mean posterior only, with a [95, 105] ROPE:


Figure 13. Session 1 onset posterior distribution of the mean given a uniform prior.

As expected with uniform priors, the results changed very little compared to using BEST’s default options.

Bayesian analysis of session 2 results

With our new posterior estimates, we can now analyse the results from session 2 using more informed priors. We plug in estimates derived from the session 1 posterior samples as priors for session 2. This makes a lot of sense given that session 1 has a large sample size (for this type of research) and is therefore our best description of the population we're trying to estimate. A sketch of this plug-in step is shown below, followed by the results:
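
One simple way to do this is to summarise the session 1 posterior of the mean with a normal distribution and pass it to the priors argument. The sketch below uses the CRAN BEST package and assumes the posterior samples of the mean are stored in a column called mu; both are assumptions about the implementation, not a description of onset_priors.R:

```r
# summarise the session 1 posterior of the mean as a normal prior for session 2
mu_ses1 <- mcmc_ses1$mu
ses1_priors <- list(muM = mean(mu_ses1), muSD = sd(mu_ses1))

mcmc_ses2 <- BESTmcmc(ses2, priors = ses1_priors, numSavedSteps = 20000)
plot(mcmc_ses2, which = "mean", compVal = 100, ROPE = c(95, 105))
```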


Figure 14. Parameter estimation for session 2 onset data given session 1 priors.


Figure 15. Session 2 onset posterior distribution of the mean given session 1 priors.

Compared to session 1 estimation, we’ve gained information: the ROPE is now completely outside the 95% HDI, which is [87.3, 93.4], and the mode of the mean is 90.3. So 100 ms is no longer a credible onset value.

What do we get if we pretend that session 1 does not exist?

  • session 2 results with session 1 priors: 95% HDI = [87.3, 93.4], mode of the mean = 90.3 (Figure 15)

  • session 2 results with broad priors: 95% HDI = [83.2, 92.2], mode of the mean = 88.1

Because the results are very similar across the two sessions, using broad priors instead of session 1 priors makes little difference. However, the session 2 estimates obtained with broad priors are shifted to the left, because the session 2 data suggest a slightly lower mean than session 1.

In our EJN paper, we reported these median onsets with 95% percentile bootstrap confidence intervals:
– session 1 = 92 ms [85, 99]
– session 2 = 85 ms [79, 92]
Our Bayesian estimation across sessions suggests that the most credible mean of the underlying distribution is 90 ms. The 95% HDI is also about half as wide as the 95% confidence intervals, so we've reduced the uncertainty about the mean.

The same procedure employed in this section can be used to analyse a new distribution of onsets from similar experiments. We have collected a smaller number of observations in a series of new experiments, and we will be analysing those results using the session 2 posteriors as priors, which were themselves obtained using the session 1 posteriors as priors.

Comparison to a new point estimate

Let's assume someone finds an onset at 50 ms, probably using group statistics, such that only a single point estimate is available. Before writing a paper declaring “face processing takes 50 ms”, is 50 ms credible given our estimation from 2 large samples? Let's assume a ROPE of 10 ms on each side of the point estimate, so that we would consider 50 ms not credible only if our previous 95% HDI does not overlap with the interval 40-60 ms at all. There are other ways to define the ROPE, for instance by sampling participants with replacement many times and computing the group estimate for every iteration. Given that most MEEG papers I have ever read report only a single value, it's very likely that future studies will focus on point estimates. Here are the results with a [40, 60] ROPE, obtained with the call sketched below:
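
Using the session 2 posterior obtained above, the comparison is a single call (compVal and ROPE are arguments of the CRAN BEST package's plot method; the exact code behind Figure 16 may differ):

```r
# is a 50 ms group onset credible given our posterior for the mean?
plot(mcmc_ses2, which = "mean", compVal = 50, ROPE = c(40, 60))
```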


Figure 16. Comparison between a 50 ms point estimate + ROPE & session 2 onset posterior distribution of the mean.

Well, as you've guessed, the answer is no, 50 ms is not a statistically credible onset! It is also not physiologically credible! And as incredible as such a 50 ms processing speed claim might seem, it has been made before, and I recently received a paper to review making exactly that claim… Now I can point the authors to this blog. Go Bayes!

One simple step to improve statistical inferences

Many changes are necessary to improve the quality of neuroscience & psychology research. Suggestions abound to increase science openness, promote better experimental designs, and educate researchers about statistical inferences. These changes will take time to implement. As part of this process, I'd like to propose one simple step that would dramatically improve the assessment of statistical results in psychology & neuroscience: ban bar graphs.


The benefits of illustrating data distributions have been emphasised in many publications, and this is often the topic of one of the first chapters of introductory statistics books. One of the most striking examples is provided by Anscombe's quartet, in which very different distributions are associated with the same summary statistics (R even ships with the dataset, as the short check below shows):
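
Base R contains the quartet, so the matching summary statistics are easy to verify:

```r
# Anscombe's quartet: four x-y pairs with (near-)identical means, variances
# and correlations, but very different distributions
data(anscombe)
sapply(anscombe, mean)                         # x1-x4 ~ 9, y1-y4 ~ 7.5
sapply(anscombe, var)                          # x1-x4 = 11, y1-y4 ~ 4.1
mapply(cor, anscombe[, 1:4], anscombe[, 5:8])  # all correlations ~ 0.816
```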


Moving away from bar graphs can achieve a badly needed departure from current statistical standards. Indeed, using scatterplots instead of bar graphs, for instance, can help shift the emphasis from the unproductive significant vs. non-significant dichotomy to a focus on what really matters: effect sizes and individual differences. By effect size, here, I do not mean Cohen's d and other normalised non-robust equivalents (Wilcox, 2006); I mean, literally, how big the effect is. Why does it matter? Say you have a significant group effect; it would be (more) informative to also answer these questions:

  • how many participants actually show an effect in the same direction as the group?
  • how many participants show no effect, or an effect in the opposite direction to the group?
  • is there a smooth continuum of effects across participants, or can we identify sub-clusters of participants who appear to behave differently from the rest?
  • exactly how big are the individual results? For instance, what does it mean for a participant to be 20 ms faster in one condition than in another? What if someone else is 40 ms faster?

Our inability to answer these last questions in many situations simply reflects our lack of knowledge and the poverty of our models and predictions. But this is not an excuse to hide the richness of our data behind bar graphs.

Let's consider an example from a published paper, which I will not identify. On the left is the bar-graph-only representation, whereas the right panel contains both bars and scatterplots. The graphs show results from two independent groups: participants in each group were tested in two conditions, and the pairwise differences are illustrated here. For paired designs, illustrating each condition of the pair separately is inadequate to portray effect sizes, because one doesn't know which points belong to the same pair. So here the authors selected the best option: plotting the differences, so that readers can appreciate effect sizes and their distribution across participants. They then performed two mixed linear analyses, one per group, and found a significant effect for controls and a non-significant effect in patients. These results seem well supported by the bar graph on the left, and the authors concluded that, unlike controls, patients did not demonstrate the effect.


We can immediately flag two problems with this conclusion. First, the authors did not test the group interaction, which is a common fallacy (Nieuwenhuis et al. 2011). Second, the lack of significance (p>0.05) does not provide evidence for the lack of an effect, again a common fallacy (see e.g. Kruschke 2013). And obviously there is a third problem: without showing the content of the bars, I would argue that no conclusion can be drawn at all. Well, in fact the authors did report the graph on the right in the above figure! Strangely, they based their conclusions on the statistical tests instead of simply looking at the data.

The data show large individual differences and overlap between the two distributions. In patients, except for 2 outliers showing large negative effects, the remaining observations are within the range observed in controls. Six patients have results suggesting an effect in the same direction as controls, 2 are near zero, and 3 go in the opposite direction. So, clearly, the conclusion that patients show no effect is not supported by the data; the non-significant result arises from the use of a statistical test that is not robust to outliers.

Here is what I would conclude about this dataset: both groups show an effect, but the effect sizes tend to be larger in controls than in patients. There are large individual differences and, in both groups, not all participants seem to show an effect. Because of these inter-participant differences, larger sample sizes need to be tested to properly quantify the effect. In light of the current data, there is evidence that patients do show an effect. Finally, the potential lack of effect in certain control participants, and the rather large effects in some patients, question the use of this particular effect as a diagnostic tool.

I will describe how I would go about analysing this dataset in another post. At the moment, I would just point out that group analyses are highly questionable when groups are small and heterogeneous. In the example above, depending on the goals of the experiment, it might suffice to report the scatterplots and a verbal description, as I provided in the previous paragraph. I would definitely favour that option over reporting a single statistical test of central tendency, whether it is robust or not.

The example of the non-significant statistical test in patients illustrates a critical point: if a paper reports bar graphs and non-significant statistical analyses of the mean, not much can be concluded! There might be differences in aspects other than the mean; central tendency differences might exist, but the assumptions of the test could have been violated, for instance because of skewness or outliers. Without informative illustrations of the results, it is impossible to tell.

In my experience as a reviewer and editor, once bar graphs are replaced by scatterplots (or boxplots, etc.) the story can get much more interesting, subtle, convincing, or the opposite… It depends on what surprises the bars are hiding. So show your data, and ask others to do the same.

“But what if I have clear effects, low within-group dispersion, and I know what I’m doing? Why can’t I use bar graphs?”

This is rather circular: unless you show the results using, for instance, scatterplots, no one knows for sure that you have clear effects and low within-group dispersion. So, if you have nothing to hide and you want to convince your readers, show your results. And honestly, how often do we get clear effects with low intra-group variability? Showing scatterplots is the start of a discussion about the nature of the results, an invitation to go beyond the significant vs. non-significant dichotomy.

“But scatterplots are ugly, they make my results look messy!”

First, your results are messy; scatterplots do not introduce messiness. Second, there is nothing stopping you from adding information to your scatterplots, for instance lines marking the quartiles of the distributions, superimposed boxplots, or any of the many other options available.
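
For instance, in base R a jittered scatterplot can be superimposed on boxplots in a few lines (fake data, for illustration only):

```r
set.seed(1)
fake_groups <- list(controls = rnorm(15, mean = 20, sd = 8),
                    patients = rnorm(11, mean = 10, sd = 12))

# boxplots summarising the distributions (outliers hidden to avoid double plotting)...
boxplot(fake_groups, outline = FALSE, ylim = range(unlist(fake_groups)),
        ylab = "Pairwise difference (a.u.)")
# ...with every individual observation shown on top
stripchart(fake_groups, vertical = TRUE, method = "jitter",
           pch = 21, bg = "grey", add = TRUE)
```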


Examples of more informative figures

Wilcox, R.R. (2006) Graphical methods for assessing effect size: Some alternatives to Cohen’s d. Journal of Experimental Education, 74, 353-367.

Allen, E.A., Erhardt, E.B. & Calhoun, V.D. (2012) Data visualization in the neurosciences: overcoming the curse of dimensionality. Neuron, 74, 603-608.

Weissgerber, T.L., Milic, N.M., Winham, S.J. & Garovic, V.D. (2015) Beyond bar and line graphs: time for a new data presentation paradigm. PLoS Biol, 13, e1002128.

Overview of robust methods, to go beyond ANOVAs on means

Wilcox, R.R. & Keselman, H.J. (2003) Modern Robust Data Analysis Methods: Measures of Central Tendency. Psychological Methods, 8, 254-274.

Wilcox, R.R. (2012) Introduction to robust estimation and hypothesis testing. Academic Press.

Extremely useful resources to go Bayesian

Kruschke, J.K. (2015) Doing Bayesian data analysis: a tutorial with R, JAGS, and Stan. Academic Press, San Diego, CA.

Understanding Bayes

Other references

Kruschke, J.K. (2013) Bayesian estimation supersedes the t test. J Exp Psychol Gen, 142, 573-603.

Nieuwenhuis, S., Forstmann, B.U. & Wagenmakers, E.J. (2011) Erroneous analyses of interactions in neuroscience: a problem of significance. Nat Neurosci, 14, 1105-1107.