
How to illustrate a 2×2 mixed ERP design

Let’s consider a simple mixed ERP design with 2 repeated measures (2 tasks) and 2 independent groups of participants (young and older participants). The Matlab code and the data are available on GitHub. The data are time-courses of mutual information, with one vector time-course per participant and task. These results are preliminary and have not been published yet, but you can get an idea of how we use mutual information in the lab from recent publications (Ince et al. 2016a, 2016b; Rousselet et al. 2014). The code and illustrations presented in the rest of the post are not specific to mutual information.

Our 2 x 2 experimental design could be analysed using the LIMO EEG toolbox, for instance by computing a 2 x 2 ANOVA at every time point and correcting for multiple comparisons using cluster-based bootstrap statistics (Pernet et al. 2011, 2015). LIMO EEG has been used to investigate task effects, for instance (Rousselet et al. 2011). But here, instead of ANOVAs, I’d like to concentrate on graphical representations and non-parametric assessments of our simple group design, to focus on effect sizes and to demonstrate how a few figures can tell a rich data-driven story.

First, we illustrate the 4 cells of our design. Figure 1 shows each group and each task separately: in each cell, all participants are superimposed using thin coloured lines. We can immediately see large differences among participants and between groups, with overall smaller effects (mutual information) in older participants. There also seem to be task differences, in particular in young participants, who tend to show more sustained effects past 200 ms in the expressive task than in the gender task.


Figure 1

To complement the individual traces, we can add measures of central tendency. The mean is shown with a thick green line, the median with a thick black line. See how the mean can be biased compared to the median in the presence of extreme values. The median was calculated using the Harrell-Davis estimator of the 50th quantile. To illustrate the group median with a measure of uncertainty, we can add, for instance, a 95% percentile bootstrap confidence interval (Figure 2).
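For readers who want to reproduce this kind of summary, here is a minimal R sketch of the Harrell-Davis median and its percentile bootstrap confidence interval, for one group at one time point. The code accompanying this post is in Matlab, so the function names below – Hmisc::hdquantile in particular – are illustrative, not the ones used to make the figures.

```r
library(Hmisc)  # provides hdquantile, the Harrell-Davis quantile estimator

set.seed(21)
x <- rnorm(20, mean = 0.2, sd = 0.05)  # fake mutual information values

md <- hdquantile(x, probs = 0.5)  # Harrell-Davis estimate of the median

# 95% percentile bootstrap confidence interval for the group median
nboot <- 2000
boot.md <- replicate(nboot, hdquantile(sample(x, replace = TRUE), probs = 0.5))
ci <- quantile(boot.md, probs = c(0.025, 0.975))
```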

Figure 2

We can immediately see discrepancies between the median time-courses and their confidence intervals on the one hand, and the individual time-courses on the other hand. Indeed, many different distributions of individual time-courses can lead to the same average time-course. That’s why it is essential to show individual results, at least in some illustrations.

In our 2 x 2 design, we now have 3 aspects to consider: group differences, task differences and their interactions. We illustrate them in turn.

Age group differences for each task

We can look at the group differences in each task separately, as shown in Figure 3. The median of each group is shown with its 95% percentile bootstrap confidence interval. On average, older participants tend to have weaker mutual information than young participants – less than half around 100-200 ms post-stimulus. This will need to be better quantified, for instance by reporting the median of all pairwise differences.


Figure 3

Under each panel showing the median + CI for each group, we plot the time-course of the group differences (young-older), with a confidence interval. For group comparisons we cannot illustrate individual differences, because participants are not paired across groups. However, we can illustrate all the bootstrap samples, shown in grey. Each sample was obtained by (an R sketch follows the list):

  • sampling with replacement Ny observations among the Ny young observers
  • sampling with replacement No observations among the No older observers
  • computing the median of each group
  • subtracting the two medians
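Here is what that procedure looks like in R, for one time point, with illustrative data and names (again, the code provided with this post is in Matlab):

```r
library(Hmisc)  # for hdquantile

set.seed(44)
young <- rnorm(20, mean = 0.2, sd = 0.05)  # fake data, one value per participant
older <- rnorm(20, mean = 0.1, sd = 0.05)

nboot <- 2000
boot.diff <- replicate(nboot, {
  by <- sample(young, replace = TRUE)  # resample Ny young observers
  bo <- sample(older, replace = TRUE)  # resample No older observers
  hdquantile(by, probs = 0.5) - hdquantile(bo, probs = 0.5)  # difference of medians
})

# 95% percentile bootstrap confidence interval for the group difference
ci <- quantile(boot.diff, probs = c(0.025, 0.975))
```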

It is particularly important to illustrate the bootstrap distributions if they are skewed or contain outliers, or both, to check that the confidence intervals provide a good summary. If the bootstrap samples are very skewed, highest density intervals might be a good alternative to classic confidence intervals.

The lower panels of Figure 3 reveal relatively large group differences in a narrow window within 200 ms. The effect also appears to be stronger in the expressive task. Technically, one could also say that the effects are statistically significant, in a frequentist sense, wherever the 95% confidence intervals do not include zero. But not much is gained from that label, because it tells us nothing about effect sizes: some effects are large, others are small. Correction for multiple comparisons would also be required.

Task differences for each group

Figure 4 has a similar layout to Figure 3, now focusing on the task differences. The top panels suggest that the group medians don’t differ much between tasks, except maybe in young participants around 300-500 ms.


Figure 4

Because task effects are paired, we are not limited to the comparison of the medians between tasks; we can also illustrate the individual task differences and the medians of these differences [1]. These are shown in the bottom panels of Figure 4. In both groups, the individual differences are large and the time-courses of the task differences are scattered around zero, except in the young group starting around 300 ms, where most participants have positive differences (expressive > gender).

[1] When the mean is used as a measure of central tendency, these two perspectives are identical, because the difference between two means is the same as the mean of the pairwise differences. However, this is not the case for the median: the difference between medians is not the same as the median of the differences. Because we are interested in effect sizes, it is more informative to report descriptive statistics of the pairwise differences. The advantage of the Matlab code provided with this post is that instead of looking at the median, we can also look at other quantiles, thus getting a better picture of the strength of the effects.
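A quick R demonstration of that distinction, with illustrative data:

```r
# The difference between medians is generally not the median of the pairwise
# differences, whereas for means the two quantities are identical.
set.seed(1)
x <- rexp(30)          # task 1, one value per participant
y <- rexp(30) + 0.5    # task 2, same participants (paired design)
median(y) - median(x)  # difference between the medians
median(y - x)          # median of the pairwise differences
mean(y) - mean(x)      # for the mean...
mean(y - x)            # ...these two are exactly equal
```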

Interaction between tasks and groups

Finally, in Figure 5 we consider the interactions between the task and group factors. To do that, we first superimpose the medians of the task differences with their confidence intervals (top panel). These traces are the same as those shown in the bottom panels of Figure 4. I can’t say I’m very happy with the top panel of Figure 5, because the two traces are difficult to compare. Essentially, they don’t seem to differ much, except maybe for the late effect, which is larger in young participants than in older participants.

Figure 5

In the lower panel of Figure 5 we illustrate the age group differences (young – older) between the medians of the pairwise task differences. Again, confidence intervals are provided, along with the original bootstrap samples. Overall, there is very little evidence for a 2 x 2 interaction, suggesting that the age group differences are fairly stable across tasks. Put another way, the weak task effects don’t appear to change much between the two age groups.

References

Ince, R.A., Jaworska, K., Gross, J., Panzeri, S., van Rijsbergen, N.J., Rousselet, G.A. & Schyns, P.G. (2016a) The Deceptively Simple N170 Reflects Network Information Processing Mechanisms Involving Visual Feature Coding and Transfer Across Hemispheres. Cereb Cortex.

Ince, R.A., Giordano, B.L., Kayser, C., Rousselet, G.A., Gross, J. & Schyns, P.G. (2016b) A statistical framework for neuroimaging data analysis based on mutual information estimated via a gaussian copula. Hum Brain Mapp.

Pernet, C.R., Chauveau, N., Gaspar, C. & Rousselet, G.A. (2011) LIMO EEG: a toolbox for hierarchical LInear MOdeling of ElectroEncephaloGraphic data. Comput Intell Neurosci, 2011, 831409.

Pernet, C.R., Latinus, M., Nichols, T.E. & Rousselet, G.A. (2015) Cluster-based computational methods for mass univariate analyses of event-related brain potentials/fields: A simulation study. Journal of neuroscience methods, 250, 85-93.

Rousselet, G.A., Gaspar, C.M., Wieczorek, K.P. & Pernet, C.R. (2011) Modeling Single-Trial ERP Reveals Modulation of Bottom-Up Face Visual Processing by Top-Down Task Constraints (in Some Subjects). Front Psychol, 2, 137.

Rousselet, G.A., Ince, R.A., van Rijsbergen, N.J. & Schyns, P.G. (2014) Eye coding mechanisms in early human face event-related potentials. J Vis, 14, 7.

Priors for Bayesian estimation of visual object processing speed in humans

In Bieniek et al. 2015 we reported EEG estimates of visual object processing onsets from a sample of 120 human adult participants aged 18-81. Among these participants, 74 were tested a second time for test-retest assessment. All the details are described in the paper, which is open access. The main outcomes of that paper were onset estimation in every participant and robust descriptive group statistics with various frequentist confidence intervals. Although I still think we could improve on the onset detection technique (notably using shape matching instead of amplitude boundary crossing), we can leave that issue aside for the moment and assume that we’re dealing with a decent sample of EEG onsets. Here I want to revisit the main group analysis using a Bayesian approach. In doing so, I will cover a very Bayesian problem: how to integrate information across experiments. I will focus on a very simple one-sample case, because that’s the nature of the data, and more importantly because this is my first attempt at Bayesian statistics!

Code and data

The R & JAGS code and the data to reproduce all the analyses and the figures are available in the R script onset_priors.R located in this stand-alone folder.

Analyses were performed in R version 3.2.2, using JAGS version 4.1.0, and R functions and examples from the BEST package version 2 by John Kruschke. The R script used to generate the results and the figures assumes that you have read Kruschke’s BEST paper and installed his code and dependencies. Much more can be found in his excellent book.

For the onset results, a more comprehensive dataset + Matlab code is available here:

Rousselet, Guillaume; Bieniek, Magdalena; Bennett, Patrick; Sekuler, Allison (2015): Face-noise ERP onsets from 194 recording sessions. figshare. https://dx.doi.org/10.6084/m9.figshare.1588513.v2.

The EEG data used to compute the onsets are available here:

Bieniek MM, Bennett PJ, Sekuler AB, Rousselet GA (2015) Data from: A robust and representative lower bound on object processing speed in humans. Dryad Digital Repository. http://dx.doi.org/10.5061/dryad.46786

Fake data

Before we get started on our experimental data, we should explore a toy example for which we know exactly what to expect. Indeed, it can be a very bad idea to apply completely new tools to your own data, because there are potentially too many unknowns, and no easy way to check that the tools are doing what they’re supposed to. So first we create fake data to check that the procedure works. We make a sample of 20 observations with a mean of 100 and standard deviation of 5.
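In R, this takes one line (the seed is arbitrary):

```r
# Fake data: 20 observations from a normal distribution with mean 100 and
# SD 5, mimicking a sample of onsets in ms
set.seed(7)
fake <- rnorm(20, mean = 100, sd = 5)
```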

Run Bayesian analysis using BEST’s defaults

We use BEST’s defaults with broad priors, 3 chains, 500 adaptation steps, 1,000 burn-in steps and 10,000 monitoring steps.
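With the CRAN version of the BEST package, the call looks something like the sketch below; the interface of Kruschke’s original scripts differs slightly, and the number of chains and adaptation steps are set internally by the package:

```r
library(BEST)

# One-sample analysis with BEST's broad default priors; numSavedSteps and
# burnInSteps match the settings described above
out <- BESTmcmc(fake, numSavedSteps = 10000, burnInSteps = 1000)
plotAll(out)  # posterior histograms + posterior predictive check
```

Here are the results: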


Figure 1. Parameter estimation given BEST’s default broad priors. Panels A-D show histograms of credible values from the posterior distribution for the mean, standard deviation, effect size and normality parameters. In panel A, the expected value of 100 is indicated in green under the mode of the marginal distribution of the mean. This value appears between 2 values: the percentages of the posterior distribution below and above the expected value. HDI stands for highest density interval. Panel E shows examples of representative posterior predictive t distributions. These examples appear in blue and are combinations of the parameter values illustrated in panels A, B and D. In red is a histogram of the data. The distributions in blue are not data: they are posterior predictive distributions given our priors and the data.

The results are as expected given the fake data we have generated: the MCMC (Markov Chain Monte Carlo) Gibbs algorithm seems to do a good job at sampling regions of the posterior distribution that are compatible with the data. But how do we assess the quality of the analysis? This is a huge topic covered in part in Kruschke’s book, Doing Bayesian data analysis, 2nd edition. He offers a diagnostic plotting function diagMCMC to investigate the behaviour of the chains.

Run diagnostic tools

We run diagMCMC on the mu parameter and obtain the figure below. Although Kruschke suggests using this figure to determine whether the chains have converged, ‘convergence’ is not a very appropriate description, because strictly speaking Markov chains do not converge. So what does this figure actually tell us?
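For reference, the call is along these lines – a sketch, where codaSamples stands for the coda object from the JAGS run, and diagMCMC is defined in Kruschke’s DBDA2E utilities, not in the BEST package:

```r
source("DBDA2E-utilities.R")  # defines diagMCMC
diagMCMC(codaObject = codaSamples, parName = "mu")
```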


Figure 2. MCMC diagnostics. A Trace plots of iterations, or steps, of the 3 chains for parameter mu (the mean). The first 1,500 steps were discarded because of the default 500 adaptation steps and 1,000 burn-in steps. The strong overlap among the 3 chains suggests that they are representative of the posterior distribution. B Autocorrelation functions for lags 0 to 35 for the 3 chains. This gives an indication of the number of steps necessary for the chains to provide independent samples from the posterior distribution. Here it looks as if 5 steps between samples would be sufficient to obtain a more representative estimation of the posterior distribution. ESS is the effective sample size: it estimates the length of a chain without autocorrelation that would give us the same information as the current one. Here we used 10,000 steps, so an ESS of ~6,000 suggests redundancy among samples, which are taken from very similar parts of the distribution. For an accurate description of the 95% HDI, an ESS of at least 10,000 is recommended, according to Kruschke; for particular applications it would be wise to validate that number. C Shrink factor over iterations (steps) in the chains. The shrink factor is a measure of between-chain variance relative to within-chain variance. A value close to 1 again suggests that the chains are sampling similar representative regions of the posterior distribution. A shrink factor <1.1 is a recommended cut-off according to Kruschke. D Density plots of the parameter value for the 3 chains, and the 95% HDI for each chain. The 3 density plots and HDIs overlap well, suggesting again that the parameter values have been sampled from similar representative regions of the posterior distribution. MCSE is the Monte Carlo standard error – the lower the better. I do not understand yet how MCSE, ESS and shrink factors are calculated.

The 3 chains appear to be well mixed and to be sampling from a high-probability region. However, there is some autocorrelation, and the ESS is ~6,000. Autocorrelation can be associated with poor, unrepresentative sampling of the posterior distribution. To see if this is a concern, we could run the MCMC analysis again and compare the 95% HDIs. If they are very similar between analyses, then we have probably sampled sufficiently from similar representative portions of the posterior distribution, and there is nothing to worry about. An alternative is to compare the results from the 3 chains we ran independently: in panel D of the figure above, the 3 marginal distributions of the mean and their associated 95% HDIs are very similar, so the sampling was probably sufficient.

To decrease the autocorrelation, we could also thin the chains, or increase the number of steps. See the very useful discussion on Kruschke’s blog suggesting that thinning is a poor strategy against autocorrelation. It seems that introducing burn-in steps is also unnecessary.

Results with 5 thinning steps

Let’s try thinning anyway. I’ll report on better strategies once I have performed some simulations. In this example, using 5 thinning steps gets rid of the autocorrelation and might be sufficient to provide a stable estimation of the mean and its 95% HDI:
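With the CRAN interface, thinning is controlled by the thinSteps argument (a sketch):

```r
# Same analysis, keeping only every 5th step
out.thin <- BESTmcmc(fake, numSavedSteps = 10000, burnInSteps = 1000,
                     thinSteps = 5)
```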


Figure 3. MCMC diagnostics for analyses with 5 thinning steps.

The results show improved HDI consistency across the 3 chains, starting from already very consistent results. In this case, 5 thinning steps were sufficient to get an ESS around the expected 10,000, although running the same analysis again produced variable results, with ESS ranging from ~8,000 to ~10,000. If our main interest is the mode and the 95% HDI of the mean of the samples from the posterior distribution, then thinning does not make much difference compared to the previous results. The required precision of the estimation depends on the application, of course.

Results are robust to outliers

In the lab, we only adopt tools that are robust against outliers. Certainly, t-tests and ANOVAs on means are not robust. What about our new Bayesian tool? To check, we add outliers to our sample of fake data and run the analysis again.
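For instance, appending a few arbitrary extreme values:

```r
# Add outliers to the fake sample and rerun the analysis
fake.out <- c(fake, 150, 160, 170)  # outlier values are arbitrary
out.outliers <- BESTmcmc(fake.out, numSavedSteps = 10000, burnInSteps = 1000)
```

The results are very convincing: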


Figure 4. Parameter estimation given data with outliers.

Although the estimation of the SD has changed, here our main interest is to estimate the mean of probable underlying distributions compatible with the data. And for the mean, the results with outliers are almost identical to those without outliers. Thus, MCMC estimation with a t distribution is robust: the estimation of the mean is barely different from the original, and within the random fluctuations expected from running the same MCMC several times. You can try different combinations of outliers to convince yourselves: personally, I’m sold.

Impact of priors

How are the results affected by our choice of priors? Instead of BEST’s default settings, we can for instance run the analysis using a uniform prior on the mean and a gamma prior on the SD. We strongly expect onsets to be contained in the broad interval 0-200 ms, so we use a uniform distribution spanning that interval. This gives results very similar to the original ones, which is expected because by default BEST uses very broad priors.
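Changing the prior family requires editing the JAGS model itself; the BEST package interface only exposes a normal prior on the mean. Here is a sketch of the relevant JAGS code, adapted from BEST’s one-sample model; the gamma shape and rate parameters sh and ra would be set elsewhere, from the data or from prior knowledge:

```r
# JAGS model with a uniform prior on mu and a gamma prior on sigma;
# the likelihood is BEST's t distribution
model.string <- "
model {
  for (i in 1:Ntotal) {
    y[i] ~ dt(mu, 1/sigma^2, nu)
  }
  mu ~ dunif(0, 200)       # onsets expected between 0 and 200 ms
  sigma ~ dgamma(sh, ra)   # gamma prior on the standard deviation
  nu <- nuMinusOne + 1
  nuMinusOne ~ dexp(1/29)  # BEST's prior on the normality parameter
}
"
```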


Figure 5. Parameter estimation given uniform prior on the mean.

But what happens if we run the analysis with strong priors on the mean? Let’s assume that we expect onsets to have a mean around 80 ms, with little dispersion around that value (prior SD of the mean = 3). In that case, the posterior samples for the mean look very different from those obtained using broad priors (Figures 1 & 5):
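With the BEST package, such a prior can be passed directly; this is a sketch, and I am assuming that priors not listed keep their default values:

```r
# Strong normal prior on the mean: centred on 80 ms with SD 3
out.strong <- BESTmcmc(fake, priors = list(muM = 80, muSD = 3),
                       numSavedSteps = 10000)
```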

Figure 6. Posterior distribution of the mean given a strong 80 ms prior. The mode is used as the central tendency of the distribution – although it can be more unstable than the median, it tends to be more representative. The expected value of 100 is indicated in green under the mode and marked by a green vertical dotted line. This value appears between 2 values: the percentages of the posterior distribution below and above the expected value. Under that, in red, is a [95, 105] ROPE, marked by vertical dotted lines. The 3 percentage values indicate the percentages of the MCMC distribution below, inside, and above the ROPE. Finally, the 95% HDI is indicated by a thick horizontal black line at the bottom of the histogram.

That’s reassuring, because if we have strong prior knowledge about the underlying distribution from which the data might be sampled, a new experiment should not be able to override that knowledge. Here, in particular, the 95% HDI does not include 100, the sample mean. So should we conclude that 100 is not a credible value for the underlying distribution, given our prior knowledge and the current experiment? If we were dealing with processing speed estimates from EEG recordings, our decision should be guided by physiological considerations. For instance, if we think that our measurements originate from one particular cortical area, then after considering conduction times between areas & other factors (Salin & Bullier, 1995; Nowak & Bullier, 1997), we could use a narrow margin of error of 5 ms on either side of 100 ms. We would then define a region of practical equivalence, or ROPE, of [95, 105]. Because the HDI does overlap with the proposed ROPE, we should probably reserve judgement about whether 100 is credible given the data.

Bayesian analysis of EEG onsets

Now that we’ve gained some confidence in the tools, let’s do a Bayesian analysis of our real dataset of EEG onsets. The variable ses1 contains onsets from 120 participants. ses2 contains onsets from 74 participants who also provided ses1 onsets. Although this is a paired design, and the results can be used to investigate test-retest reliability (see our EJN 2015 paper), here we treat the two sessions as independent.

First, let’s plot kernel density estimates for the two sessions:
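A sketch of that plot; ses1 and ses2 are the onset vectors loaded by the script:

```r
# Kernel density estimates of the onset distributions for the two sessions
plot(density(ses1), xlab = "Onsets (ms)", main = "", lwd = 2)
lines(density(ses2), lwd = 2, lty = 2)
legend("topright", legend = c("session 1", "session 2"),
       lwd = 2, lty = c(1, 2))
```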


Figure 7. Kernel density estimates for session 1 and session 2 observations.

Bayesian analysis on session 1 using BEST’s defaults

To start, we use BEST’s defaults. Before we consider the results, we call the diagnostic plotting function diagMCMC for mu, our main parameter of interest.


Figure 8. MCMC diagnostics for session 1 onset analysis using BEST’s defaults.

The diagnostic tools suggest the chains are well mixed and seem to be sampling from a high-probability region. However, the ESS is ~6,000, which calls for some thinning or more steps. So let’s try again with 20,000 steps:
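That is, in sketch form:

```r
# Session 1 analysis with more monitoring steps, to push the ESS towards 10,000
out.ses1 <- BESTmcmc(ses1, numSavedSteps = 20000, burnInSteps = 1000)
```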


Figure 9. MCMC diagnostics for session 1 onset analysis using 20,000 steps.

ESS is now ~10,000, and all the diagnostic indicators look good. It seems we’ve got a good sampling of the posterior distribution. So let’s display the results, with a 100 ms onset for comparison:


Figure 10. Parameter estimation for session 1 onset data using 20,000 iterations.

Here we get a mode of 92.6 and a 95% HDI of [88.4, 96.7]. If we had the hypothesis of a mean onset of 100 ms, the results suggest a credible mean lower than 100 ms. Let’s plot only the posterior samples of the mean:
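This plot can be made with plotPost, which accepts a comparison value and a ROPE; in the BEST output, the mu column holds the posterior samples of the mean (sketch):

```r
# Posterior samples of the mean, with a 100 ms comparison value and the
# [95, 105] ROPE discussed below
plotPost(out.ses1$mu, compVal = 100, ROPE = c(95, 105),
         showMode = TRUE, xlab = "Onset (ms)")
```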


Figure 11. Session 1 onset posterior distribution of the mean.

A 95-105 ms ROPE overlaps with the 95% HDI, so we conclude that 100 ms is still a credible value.

Bayesian analysis on session 1 using informed priors

What happens if we use more informed priors? For instance, the literature suggests that face onsets are around 50-150 ms. Here are the results using a uniform distribution on that interval:


Figure 12. Parameter estimation for session 1 data using a uniform prior.

And the mean posterior only, with a [95, 105] ROPE:


Figure 13. Session 1 onset posterior distribution of the mean given a uniform prior.

As expected with uniform priors, the results changed very little compared to using BEST’s default options.

Bayesian analysis of session 2 results

With our new posterior estimates, we can now study the results from session 2 using more informed priors: we plug in estimates of the posterior samples from session 1 as priors for session 2. This makes a lot of sense given that session 1 has a large sample size (for this type of research) and is therefore our best description of the population we’re trying to estimate.
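One simple way to do this, sketched below with the BEST package interface, is to summarise the session 1 posterior samples of mu by their mean and standard deviation and use these as a normal prior on the session 2 mean; this is an approximation, since the full posterior is reduced to two numbers:

```r
# Use the session 1 posterior of mu to set the session 2 prior on mu
mu.prior   <- mean(out.ses1$mu)  # centre of the session 1 posterior
musd.prior <- sd(out.ses1$mu)    # its spread
out.ses2 <- BESTmcmc(ses2, priors = list(muM = mu.prior, muSD = musd.prior),
                     numSavedSteps = 20000)
```

Here are the results: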


Figure 14. Parameter estimation for session 2 onset data given session 1 priors.


Figure 15. Session 2 onset posterior distribution of the mean given session 1 priors.

Compared to session 1 estimation, we’ve gained information: the ROPE is now completely outside the 95% HDI, which is [87.3, 93.4], and the mode of the mean is 90.3. So 100 ms is no longer a credible onset value.

What do we get if we pretend that session 1 does not exist?

  • session 2 results with session 1 priors: 95% HDI = [87.3, 93.4], mode of the mean = 90.3 (Figure 15)

  • session 2 results with broad priors: 95% HDI = [83.2, 92.2], mode of the mean = 88.1

Because the results are very similar across the two sessions, using broad priors instead of session 1 priors gives very similar results. However, the session 2 estimates obtained with broad priors are shifted to the left, because the session 2 data distribution suggests a slightly lower mean compared to session 1.

In our EJN paper, we reported these median onsets with 95% percentile bootstrap confidence intervals:
– session 1 = 92 ms [85, 99]
– session 2 = 85 ms [79, 92]
Our Bayesian estimation across sessions suggests that the underlying distribution’s most credible mean is 90 ms. The 95% HDI is also about half as wide as the 95% confidence intervals, so we’ve reduced the uncertainty about the mean.

The same procedure employed in this section can be used to analyse a new distribution of onsets from similar experiments. We have collected a smaller number of observations in a series of new experiments, and we will be analysing the results using the session 2 posteriors as priors, themselves obtained given the session 1 priors.

Comparison to a new point estimate

Let’s assume someone finds an onset at 50 ms, probably using group statistics, such that there is only a single point estimate available. Before writing a paper declaring “face processing takes 50 ms”, is 50 ms credible given our estimation from 2 large samples? Let’s assume a ROPE of 10 ms on each side of the point estimate, so that we would consider 50 ms not credible only if our previous 95% HDI does not overlap with the interval [40, 60] at all. There are other ways to estimate the ROPE, for instance by sampling participants with replacement many times and computing the group estimate for every iteration. Given that most MEEG papers I have ever read report only a single value, it’s very likely that future studies will focus on point estimates.
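The comparison itself is one call (sketch):

```r
# Compare the session 2 posterior of the mean to a 50 ms point estimate
plotPost(out.ses2$mu, compVal = 50, ROPE = c(40, 60),
         showMode = TRUE, xlab = "Onset (ms)")
```

Here are the results with a [40, 60] ROPE: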


Figure 16. Comparison between a 50 ms point estimate + ROPE & session 2 onset posterior distribution of the mean.

Well, as you’ve guessed, the answer is no: 50 ms is not a statistically credible onset! It is also not physiologically credible! And as incredible as such a 50 ms processing speed claim might seem, it has been made before, and I recently received a paper making such a claim to review… Now I can point the authors to this blog. Go Bayes!