
Test-retest reliability assessment using graphical methods

UPDATE (2018-05-17): as explained in the now updated previous post, the shift function for pairwise differences, originally described as a great tool to assess test-retest reliability, is completely flawed. The approach using scatterplots remains valid. If you know of other graphical methods, please leave a comment.


Test-retest reliability is often summarised with a correlation coefficient, frequently without any illustration of the raw data. This is a very bad idea, because the same correlation coefficient can result from many different configurations of observations. Graphical representations are thus essential to assess test-retest reliability, as demonstrated for instance in the work of Bland & Altman (1986).

The R code for this post is on github.

Example 1: made-up data

Let’s look at a first example using made-up data. Imagine that reaction times were measured from 100 participants in two sessions. The medians of the two distributions do not differ much, but the shapes do differ a lot, similarly to the example covered in the previous post.

figure_kde

The kernel density estimates above do not reveal the pairwise associations between observations. This is better done using a scatterplot. In this plot, strong test-retest reliability would show up as a tight cloud of points along the unity line (the black diagonal line).

figure_scatter

Here the observations do not fall on the unity line: instead the relationship has a much shallower slope than expected if test-retest reliability were high. For fast responses in session 1, responses tended to be slower in session 2. Conversely, for slow responses in session 1, responses tended to be faster in session 2. This pattern would be expected if there was regression to the mean (Wikipedia; Barnett et al., 2005), that is, if particularly fast or particularly slow responses in session 1 were due in part to chance, such that responses from the same individuals in session 2 were closer to the group mean. Here we know this is the case because the data were made up to have that pattern.
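If you want to play with this pattern, here is a minimal base R sketch of the generative idea (not the actual simulation from the github script): each participant has a latent true score, and each session adds independent measurement noise.

```r
# Minimal sketch: latent true scores + independent noise per session
# produce regression to the mean in session 2.
set.seed(21)
n <- 100
true_rt <- rnorm(n, mean = 500, sd = 50)  # latent true reaction times (ms)
session1 <- true_rt + rnorm(n, sd = 80)   # session 1 = truth + noise
session2 <- true_rt + rnorm(n, sd = 80)   # session 2 = truth + independent noise
plot(session1, session2, asp = 1)         # test-retest scatterplot
abline(0, 1)                              # unity line
```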

We can also use a shift function for dependent groups to investigate the relationship between sessions, as we did in the previous post.

figure_sf_dhd

The shift function reveals a characteristic difference in spread between the two distributions, a pattern that is also expected if there is regression to the mean. Essentially, the shift function shows how the distribution in session 2 needs to be modified to match the distribution in session 1: the lowest deciles need to be decreased and the highest deciles need to be increased, and these changes should be stronger as we move towards the tails of the distribution. For this example, these changes would be similar to an anti-clockwise rotation of the regression slope in the next figure, to align the cloud of observations with the black diagonal line.

figure_scatter_regline

To confirm these observations, we also perform a shift function for pairwise differences.

This second type of shift function reveals a pattern very similar to the previous one. In the previous post, I wrote that this “is re-assuring. But there might be situations where the two versions differ.” Well, here are two such situations…

Example 2: ERP onsets

Here we look at ERP onsets from an object detection task (Bieniek et al. 2016). In that study, 74 of our 120 participants were tested twice, to assess the test-retest reliability of different measurements, including onsets. The distributions of onsets across participants are positively skewed, with a few participants showing particularly early or late onsets. The distributions for the two sessions appear quite similar.

figure_ERP_kde

With these data, we were particularly interested in the reliability of the left and right tails: if early onsets in session 1 were due to chance, we would expect session 2 estimates to be overall larger (shifted to the right); similarly, if late onsets in session 1 were due to chance, we would expect session 2 estimates to be overall smaller (shifted to the left). Plotting session 2 onsets as a function of session 1 onsets does not reveal the strong pattern of regression to the mean we observed in example 1.

figure_ERP_scatter1

Adding a loess regression line suggests there might actually be an overall clockwise rotation of the cloud of points relative to the black diagonal.

figure_ERP_scatter1_regline

The pattern is even more apparent if we plot the difference between sessions on the y axis. This is sometimes called a Bland & Altman plot (but here without the SD lines).

figure_ERP_scatter2_regline
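For readers who want a template, here is a hedged ggplot2 sketch of this kind of plot, with placeholder data standing in for the onsets (the real analysis is in the github code):

```r
# Bland & Altman style plot: between-session differences as a function
# of session 1, with a loess trend. Placeholder data, not the real onsets.
library(ggplot2)
set.seed(44)
onset1 <- 60 + rexp(74, rate = 1/30)  # hypothetical session 1 onsets (ms)
onset2 <- 60 + rexp(74, rate = 1/30)  # hypothetical session 2 onsets (ms)
df <- data.frame(onset1, diff = onset2 - onset1)
ggplot(df, aes(x = onset1, y = diff)) +
  geom_point() +
  geom_hline(yintercept = 0) +               # perfect agreement
  geom_smooth(method = "loess", se = FALSE)  # loess trend
```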

However, a shift function on the marginals is relatively flat.

figure_ERP_sf_dhd

Although there seems to be a linear trend, the uncertainty around the differences between deciles is large. In the original paper, we wrote this conclusion (sorry for the awful frequentist statement, I won’t do it again):

“across the 74 participants tested twice, no significant differences were found between any of the onset deciles (Fig. 6C). This last result is important because it demonstrates that test–retest reliability does not depend on onset times. One could have imagined for instance that the earliest onsets might have been obtained by chance, so that a second test would be systematically biased towards longer onsets: our analysis suggests that this was not the case.”

That conclusion was probably wrong, because the shift function for dependent marginals is inappropriate to assess test-retest reliability: inferences should be made on pairwise differences instead. Indeed, if we use the shift function for pairwise differences, the results are very different! A much better diagnostic tool is to plot difference results as a function of session 1 results. This approach suggests, given our relatively small sample size:

  • the earlier the onsets in session 1, the more they increased in session 2, such that the difference between sessions became more negative;
  • the later the onsets in session 1, the more they decreased in session 2, such that the difference between sessions became more positive. 

This result, and the discrepancy between the two types of shift functions, can be explained by a simple principle: for dependent variables, the difference between two means is equal to the mean of the individual pairwise differences; however, this does not have to be the case for other estimators, such as quantiles (Wilcox & Rousselet, 2018).

Also, the discrepancy shows that I reached the wrong conclusion in a previous study because I used the wrong analysis. Of course, there is always the possibility that I’ve made a terrible coding mistake somewhere (it wouldn’t be the first time – please let me know if you spot a fatal one). So let’s look at another example, using published clinical data in which regression to the mean was suspected.

Example 3: Nambour skin cancer prevention trial

The data are from a cancer clinical trial described by Barnett et al. (2005). Here is Figure 3 from that paper:

barnett-ije-2005

“Scatter-plot of n = 96 paired and log-transformed betacarotene measurements showing change (log(follow-up) minus log(baseline)) against log(baseline) from the Nambour Skin Cancer Prevention Trial. The solid line represents perfect agreement (no change) and the dotted lines are fitted regression lines for the treatment and placebo groups”

Let’s try to make a similar-looking figure.

figure_nambour_scatter

Unfortunately, the original figure cannot be reproduced because the group membership has been mixed up in the shared dataset… So let’s merge the two groups and plot the data following our shift function convention, in which the difference is session 1 – session 2.

figure_nambour_scatter2

Regression to the mean is suggested by the large number of negative differences and the negative slope of the loess regression: participants with low results in session 1 tended to have higher results in session 2. This pattern can also be revealed by plotting session 2 as a function of session 1.

figure_nambour_scatter3

The shift function for marginals suggests increasing differences between session quantiles for increasing quantiles in session 1.

figure_nambour_sf_dhd

This result seems at odds with the previous plot, but it is easier to understand if we look at the kernel density estimates of the marginals. Plotting difference scores as a function of session 1 scores probably remains the best strategy for a fine-grained look at test-retest results.

figure_nambour_kde

A shift function for pairwise differences shows a very different pattern, consistent with the regression to the mean suggested by Barnett et al. (2005).

Conclusion

To assess test-retest reliability, it is very informative to use graphical representations, which can reveal interesting patterns that would be hidden in a correlation coefficient. Unfortunately, there doesn’t seem to be a magic tool to simultaneously illustrate and make inferences about test-retest reliability.

It seems that the shift function for pairwise differences is an excellent tool to look at test-retest reliability, and to spot patterns of regression to the mean. The next steps for the shift function for pairwise differences will be to perform some statistical validations for the frequentist version, and develop a Bayesian version.

That’s it for this post. If you use the shift function for pairwise differences to look at test-retest reliability, let me know and I’ll add a link here.

References

Barnett, A.G., van der Pols, J.C. & Dobson, A.J. (2005) Regression to the mean: what it is and how to deal with it. Int J Epidemiol, 34, 215-220.

Bland, J.M. & Altman, D.G. (1986) Statistical methods for assessing agreement between two methods of clinical measurement. Lancet, i, 307-310.

Bieniek, M.M., Bennett, P.J., Sekuler, A.B. & Rousselet, G.A. (2016) A robust and representative lower bound on object processing speed in humans. The European journal of neuroscience, 44, 1804-1814.

Wilcox, R.R. & Rousselet, G.A. (2018) A Guide to Robust Statistical Methods in Neuroscience. Curr Protoc Neurosci, 82, 8.42.1-8.42.30.


Bayesian shift function

Two distributions can differ in many ways, yet the standard approach in neuroscience & psychology is to assume differences in means. That’s why the first step in exploratory data analysis should always be detailed graphical representations (Rousselet et al. 2016, 2017). To help quantify how two distributions differ, a fantastic tool is the shift function – an example is provided below. It consists in plotting the difference between group quantiles, as a function of the quantiles in one group. The technique was first described in the 1970s by Doksum (1974; Doksum & Sievers, 1976), and later refined by Wilcox using the Harrell-Davis quantile estimator (Harrell & Davis, 1982) in conjunction with two percentile bootstrap methods (Wilcox 1995; Wilcox et al. 2014). The technique is related to delta plots and relative distribution methods (see details in Rousselet et al. 2017).  

The goal of this post is to get feedback on my first attempt to make a Bayesian version of the shift function. Below I describe three potential strategies. My main motivation for making a Bayesian version is that the shift function comes with frequentist confidence intervals and p values. Although I still use confidence intervals to describe sampling variability, they are inherently linked to p values and tend to be associated with major flaws in interpretation (Morey et al. 2016). And my experience is that if p values are available, most researchers will embark on lazy and irrational decision making (Wagenmakers 2007). P values would not be a problem if they were just used as one of many pieces of evidence, without any special status (McShane et al. 2017).

Let’s consider a toy example: two independent exGaussian distributions, with n = 100 in each group.
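If you want to generate similar data without consulting the online code, an exGaussian deviate is simply the sum of a normal and an exponential deviate; the parameters below are illustrative guesses, not necessarily those used in the post.

```r
# exGaussian = normal + exponential. Illustrative parameters only.
set.seed(1)
rexgauss <- function(n, mu, sigma, tau) rnorm(n, mu, sigma) + rexp(n, rate = 1/tau)
g1 <- rexgauss(100, mu = 300, sigma = 20, tau = 100)  # group 1
g2 <- rexgauss(100, mu = 300, sigma = 20, tau = 250)  # group 2: heavier right tail
```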

figure_data

The two groups clearly differ, as expected from the generative process (see online code). A t-test on means suggests a large uncertainty about the group difference:

difference = -65 ms [-166, 37]

t = -1.26

p = 0.21

A shift function provides much more information about how the two groups differ.

figure_sf

The x-axis shows the quantiles of group 1 (here only the 9 deciles, which is a good default). The y-axis shows the difference between the deciles of group 1 and group 2. Intuitively, the difference shows by how much group 2 would need to be shifted to match group 1, for each decile. The coloured labels show the quantile differences. I’ll let you go back and forth between the density estimates and the shift function to understand what’s going on. Another detailed description is provided here if needed.
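To make the mechanics concrete, here is a bare-bones sketch of the point estimates only (Harrell-Davis deciles and their differences), without the percentile bootstrap confidence intervals, reusing g1 and g2 from the sketch above:

```r
# Harrell-Davis quantile estimator: a weighted sum of the order statistics,
# with weights from a beta distribution (Harrell & Davis, 1982).
hd <- function(x, q) {
  n <- length(x)
  a <- (n + 1) * q
  b <- (n + 1) * (1 - q)
  w <- pbeta((1:n) / n, a, b) - pbeta((0:(n - 1)) / n, a, b)
  sum(w * sort(x))
}
dec <- seq(0.1, 0.9, 0.1)                 # the 9 deciles
d1 <- sapply(dec, function(q) hd(g1, q))  # group 1 deciles
d2 <- sapply(dec, function(q) hd(g2, q))  # group 2 deciles
plot(d1, d1 - d2, xlab = "Group 1 deciles", ylab = "Group 1 - group 2")
```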

Strategy 1: Bayesian bootstrap

A simple strategy to make a Bayesian shift function is to use the Bayesian bootstrap instead of a percentile bootstrap. The percentile bootstrap can already be considered a very cheap way to create Bayesian posterior distributions, if we make the strong (and wrong) assumption that our observations are the only possible ones. Rasmus Bååth provides a detailed introduction to the Bayesian bootstrap, and its R implementation, on his blog. There is also a video.
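Here is a minimal sketch of the idea for one decile, following the two-step recipe from Rasmus Bååth’s posts (uniform Dirichlet weights, then weighted resampling); his bayesboot package is the proper reference implementation, and the hd function comes from the sketch above.

```r
# Bayesian bootstrap posterior for one Harrell-Davis decile.
bb_decile <- function(x, q, niter = 2000) {
  n <- length(x)
  replicate(niter, {
    w <- rexp(n)     # Dirichlet(1,...,1) weights,
    w <- w / sum(w)  # via normalised exponential draws
    hd(sample(x, n, replace = TRUE, prob = w), q)
  })
}
# posterior samples of the median difference between groups
post <- bb_decile(g1, 0.5) - bb_decile(g2, 0.5)
quantile(post, c(0.025, 0.975))  # crude interval; the post reports 95% HDIs
```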

Using the Bayesian bootstrap, and 95% highest density intervals (HDI) of the posterior distributions, the shift function looks very similar to the original version, as expected. 

figure_bbsf

Except now we’re dealing with credible intervals, and there are no p values, so users have to focus on quantification!

Strategy 2: Bayesian quantile regression

Another strategy is to use quantile regression, which comes in a Bayesian flavour using the asymmetric Laplace likelihood family. To do that, I’m using the amazing brms R package by Paul Bürkner.

To get started with Bayesian statistics and the brms package, I recommend this excellent blog post by Matti Vuorre. Many other great posts are available here.

Using the default priors from brms, we can fit a quantile regression line for each group and for each decile. The medians (or means, or modes) and 95% HDIs of the posterior distributions can then be used to create a shift function.
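In code, this is roughly what I have in mind, if I understand the brms interface correctly: the asym_laplace family with the quantile distributional parameter fixed to the decile of interest, one model per decile. The coefficient names in the posterior data frame are assumptions based on brms defaults.

```r
# One Bayesian quantile regression, here for the first decile.
library(brms)
df <- data.frame(y = c(g1, g2),
                 group = factor(rep(c("g1", "g2"), each = 100)))
fit <- brm(bf(y ~ 0 + group, quantile = 0.1),  # fix the quantile parameter
           family = asym_laplace(), data = df,
           chains = 3, cores = 3)
post <- as.data.frame(fit)
diff_post <- post$b_groupg1 - post$b_groupg2  # posterior of the decile difference
```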

figure_bqrsf

Again, the results are quite similar to the original ones, although this time the quantiles do differ a bit from those in the previous versions because we use the medians of the posterior distributions to estimate them. Also, I haven’t looked in much detail at how well the model fits the data. The posterior predictive samples suggest the fits could be improved, but I have too little experience to make a call.

Strategy 3: Bayesian model with exGaussians

The third strategy is to fit a descriptive model to the distributions, generate samples from the posterior distributions, and compute quantiles from these predicted values. Here, since our toy model simulates reaction time data using exGaussian distributions, it makes sense to fit an exGaussian family to the data. More generally, exGaussian distributions are very good at capturing the shape of RT data (Matzke & Wagenmakers, 2009).
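A hedged sketch of this strategy, reusing the df data frame from the previous sketch: fit a single exGaussian model with a group predictor, then compute deciles from posterior predictive samples.

```r
# brms exGaussian fit + deciles of posterior predictive samples.
fit_exg <- brm(y ~ 0 + group, family = exgaussian(), data = df,
               chains = 3, cores = 3)
pp <- posterior_predict(fit_exg, newdata = data.frame(group = c("g1", "g2")))
dec <- seq(0.1, 0.9, 0.1)
d1 <- quantile(pp[, 1], dec)  # predicted deciles, group 1
d2 <- quantile(pp[, 2], dec)  # predicted deciles, group 2
d1 - d2  # shift function point estimates; HDIs would require per-draw deciles
```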

figure_bexgsf

Again, the results look similar to the original ones. This strategy has the advantage of requiring a single model fit, instead of a new model for each quantile as in strategy 2. In addition, we get an exGaussian fit of the data, which is very useful in itself. So that would probably be my favourite strategy.

Questions to you, reader

Does this even remotely make sense?

Which strategy seems more promising?

The clear benefit of strategies 2 and 3 is that they can easily be extended to create hierarchical shift functions, by simultaneously fitting observations from multiple participants. For the original shift function using the percentile bootstrap, in contrast, I don’t see any obvious way to make a hierarchical version.

You might want to ask, why not simply focus on modelling the distributions instead of looking at the quantiles? Modelling the shape of the distributions is great of course, but I don’t think it can achieve the same fine-grained quantification as the shift function. Another approach would be to fit a more psychologically motivated model, such as a diffusion model. I think these three approaches are complementary.

References

Doksum, K. (1974) Empirical Probability Plots and Statistical Inference for Nonlinear Models in the two-Sample Case. Annals of Statistics, 2, 267-277.

Doksum, K.A. & Sievers, G.L. (1976) Plotting with Confidence – Graphical Comparisons of 2 Populations. Biometrika, 63, 421-434.

Harrell, F.E. & Davis, C.E. (1982) A new distribution-free quantile estimator. Biometrika, 69, 635-640.

McShane, B.B., Gal, D., Gelman, A., Robert, C. & Tackett, J.L. (2017) Abandon Statistical Significance. arXiv:1709.07588.

Matzke, D. & Wagenmakers, E.J. (2009) Psychological interpretation of the ex-Gaussian and shifted Wald parameters: a diffusion model analysis. Psychon Bull Rev, 16, 798-817.

Morey, R.D., Hoekstra, R., Rouder, J.N., Lee, M.D. & Wagenmakers, E.J. (2016) The fallacy of placing confidence in confidence intervals. Psychon Bull Rev, 23, 103-123.

Rousselet, G.A., Foxe, J.J. & Bolam, J.P. (2016) A few simple steps to improve the description of group results in neuroscience. The European journal of neuroscience, 44, 2647-2651.

Rousselet, G., Pernet, C. & Wilcox, R. (2017) Beyond differences in means: robust graphical methods to compare two groups in neuroscience, figshare.

Wagenmakers, E.J. (2007) A practical solution to the pervasive problems of p values. Psychonomic bulletin & review, 14, 779-804.

Wilcox, R.R. (1995) Comparing Two Independent Groups Via Multiple Quantiles. Journal of the Royal Statistical Society. Series D (The Statistician), 44, 91-99.

Wilcox, R.R., Erceg-Hurn, D.M., Clark, F. & Carlson, M. (2014) Comparing two independent groups via the lower and upper quantiles. J Stat Comput Sim, 84, 1543-1551.

How to illustrate a 2×2 mixed ERP design

Let’s consider a simple mixed ERP design with 2 repeated measures (2 tasks) and 2 independent groups of participants (young and older participants). The Matlab code and the data are available on github. The data are time-courses of mutual information, with one vector time-course per participant and task. These results are preliminary and have not been published yet, but you can get an idea of how we use mutual information in the lab in recent publications (Ince et al. 2016a, 2016b; Rousselet et al. 2014). The code and illustrations presented in the rest of the post are not specific to mutual information.

Our 2 x 2 experimental design could be analysed using the LIMO EEG toolbox for instance, by computing a 2 x 2 ANOVA at every time point, and correcting for multiple comparisons using cluster-based bootstrap statistics (Pernet et al. 2011, 2015). LIMO EEG has been used to investigate task effects for instance (Rousselet et al. 2011). But here, instead of ANOVAs, I’d like to concentrate on graphical representations and non-parametric assessment of our simple group design, to focus on effect sizes and to demonstrate how a few figures can tell a rich data-driven story.

First, we illustrate the 4 cells of our design. Figure 1 shows each group and each task separately: in each cell, all participants are superimposed using thin coloured lines. We can immediately see large differences among participants and between groups, with overall smaller effects (mutual information) in older participants. There also seem to be task differences, in particular in young participants, with more sustained effects past 200 ms in the expressive task than in the gender task.

fig1_gpmi2x2

Figure 1

To complement the individual traces, we can add measures of central tendency. The mean is shown with a thick green line, the median with a thick black line. See how the mean can be biased compared to the median in the presence of extreme values. The median was calculated using the Harrell-Davis estimator of the 50th quantile. To illustrate the group median with a measure of uncertainty, we can add a 95% percentile bootstrap confidence interval for instance (Figure 2).

fig2_gpmi2x2_ci

Figure 2

We can immediately see discrepancies between the median time-courses and their confidence intervals on the one hand, and the individual time-courses on the other hand. There are indeed many distributions of participants that can lead to the same average time-course. That’s why it is essential to show individual results, at least in some illustrations.

In our 2 x 2 design, we now have 3 aspects to consider: group differences, task differences and their interactions. We illustrate them in turn.

Age group differences for each task

We can look at the group differences in each task separately, as shown in Figure 3. The median of each group is shown with its 95% percentile bootstrap confidence interval. On average, older participants tend to have weaker mutual information than young participants – less than half around 100-200 ms post-stimulus. This will need to be better quantified, for instance by reporting the median of all pairwise differences.

fig3_gpmi_group_diff

Figure 3

Under each panel showing the median + CI for each group, we plot the time-course of the group differences (young - older), with a confidence interval. For group comparisons we cannot illustrate individuals, because observations are not paired. However, we can illustrate all the bootstrap samples, shown in grey. Each sample was obtained by the steps below (an R sketch follows the list):

  • sampling with replacement Ny observations among the Ny young observers
  • sampling with replacement No observations among the No older observers
  • computing the median of each group
  • subtracting the two medians
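The post’s code is in Matlab; for one time point, an equivalent R sketch with placeholder data might look like this (the actual analyses use the Harrell-Davis estimate of the median rather than the sample median):

```r
# Percentile bootstrap for the difference between two independent
# group medians, at a single time point. Placeholder data.
set.seed(7)
young <- rnorm(20, mean = 1.0, sd = 0.3)  # hypothetical MI values, young
older <- rnorm(22, mean = 0.5, sd = 0.3)  # hypothetical MI values, older
boot_diff <- replicate(2000, {
  median(sample(young, replace = TRUE)) -  # resample Ny young observers
    median(sample(older, replace = TRUE))  # resample No older observers
})
quantile(boot_diff, c(0.025, 0.975))  # 95% percentile bootstrap CI
# boot_diff also provides the grey bootstrap traces shown in the figures
```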

It is particularly important to illustrate the bootstrap distributions if they are skewed or contain outliers, or both, to check that the confidence intervals provide a good summary. If the bootstrap samples are very skewed, highest density intervals might be a good alternative to classic confidence intervals.

The lower panels of Figure 3 reveal relatively large group differences in a narrow window within 200 ms. The effect also appears to be stronger in the expressive task. Technically, one could also say that the effects are statistically significant, in a frequentist sense, when the 95% confidence intervals do not include zero. But not much is gained from that because some effects are large and others are small. Correction for multiple comparisons would also be required.

Task differences for each group

Figure 4 has a similar layout to Figure 3, now focusing on the task differences. The top panels suggest that the group medians don’t differ much between tasks, except maybe in young participants around 300-500 ms.

fig4_gpmi_task_effects

Figure 4

Because task effects are paired, we are not limited to the comparison of the medians between tasks; we can also illustrate the individual task differences and the medians of these differences [1]. These are shown in the bottom panels of Figure 4. In both groups, the individual differences are large and the time-courses of the task differences are scattered around zero, except in the young group starting around 300 ms, where most participants have positive differences (expressive > gender).

[1] When the mean is used as a measure of central tendency, these two perspectives are identical, because the difference between two means is the same as the mean of the pairwise differences. However, this is not the case for the median: the difference between medians is not the same as the median of the differences. Because we are interested in effect sizes, it is more informative to report descriptive statistics of the pairwise differences. The advantage of the Matlab code provided with this post is that instead of looking at the median, we can also look at other quantiles, thus getting a better picture of the strength of the effects.
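A quick demonstration of that footnote in R:

```r
# The difference of means equals the mean of pairwise differences,
# but the analogous identity does not hold for medians.
set.seed(3)
x <- rexp(11); y <- rexp(11)
mean(x) - mean(y); mean(x - y)        # identical
median(x) - median(y); median(x - y)  # generally different
```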

Interaction between tasks and groups

Finally, in Figure 5 we consider the interactions between task and group factors. To do that we first superimpose the medians of the task differences with their confidence intervals (top panel). These traces are the same as those shown in the bottom panels of Figure 4. I can’t say I’m very happy with the top panel of Figure 5, because the two traces are difficult to compare. Essentially they don’t seem to differ much, except maybe for the late effect, which is larger in young than in older participants.

fig5_gpmi_task_group_interaction

Figure 5

In the lower panel of Figure 5 we illustrate the age group differences (young - older) between the medians of the pairwise task differences. Again, confidence intervals are provided, along with the original bootstrap samples. Overall, there is very little evidence for a 2 x 2 interaction, suggesting that the age group differences are fairly stable across tasks. Put another way, the weak task effects don’t appear to differ much between the two age groups.

References

Ince, R.A., Jaworska, K., Gross, J., Panzeri, S., van Rijsbergen, N.J., Rousselet, G.A. & Schyns, P.G. (2016a) The Deceptively Simple N170 Reflects Network Information Processing Mechanisms Involving Visual Feature Coding and Transfer Across Hemispheres. Cereb Cortex.

Ince, R.A., Giordano, B.L., Kayser, C., Rousselet, G.A., Gross, J. & Schyns, P.G. (2016b) A statistical framework for neuroimaging data analysis based on mutual information estimated via a gaussian copula. Hum Brain Mapp.

Pernet, C.R., Chauveau, N., Gaspar, C. & Rousselet, G.A. (2011) LIMO EEG: a toolbox for hierarchical LInear MOdeling of ElectroEncephaloGraphic data. Comput Intell Neurosci, 2011, 831409.

Pernet, C.R., Latinus, M., Nichols, T.E. & Rousselet, G.A. (2015) Cluster-based computational methods for mass univariate analyses of event-related brain potentials/fields: A simulation study. Journal of neuroscience methods, 250, 85-93.

Rousselet, G.A., Gaspar, C.M., Wieczorek, K.P. & Pernet, C.R. (2011) Modeling Single-Trial ERP Reveals Modulation of Bottom-Up Face Visual Processing by Top-Down Task Constraints (in Some Subjects). Front Psychol, 2, 137.

Rousselet, G.A., Ince, R.A., van Rijsbergen, N.J. & Schyns, P.G. (2014) Eye coding mechanisms in early human face event-related potentials. J Vis, 14, 7.

Priors for Bayesian estimation of visual object processing speed in humans

In Bieniek et al. 2015 we reported EEG estimates of visual object processing onsets from a sample of 120 human adult participants aged 18-81. Among these participants, 74 were tested a second time for test-retest assessment. All the details are described in the paper, which is open access. The main outcomes of that paper were onset estimation in every participant and robust descriptive group statistics with various frequentist confidence intervals. Although I still think we could improve on the onset detection technique (notably using shape matching instead of amplitude boundary crossing), we can leave that issue aside for the moment and assume that we’re dealing with a decent sample of EEG onsets. Here I want to revisit the main group analysis using a Bayesian approach. In doing so, I will cover a very Bayesian problem: how to integrate information across experiments. I will focus on a very simple one-sample case, because that’s the nature of the data, and more importantly because this is my first attempt at Bayesian statistics!

Code and data

The R & JAGS code and the data to reproduce all the analyses and the figures are available in the R script onset_priors.R located in this stand-alone folder.

Analyses were performed in R version 3.2.2, using JAGS version 4.1.0, and R functions and examples from the BEST package version 2 by John Kruschke. The R script used to generate the results and the figures assumes that you have read Kruschke’s BEST paper and installed his code and dependencies. Much more can be found in his excellent book.

For the onset results, a more comprehensive dataset + Matlab code is available here:

Rousselet, Guillaume; Bieniek, Magdalena; Bennett, Patrick; Sekuler, Allison (2015): Face-noise ERP onsets from 194 recording sessions. figshare. https://dx.doi.org/10.6084/m9.figshare.1588513.v2.

The EEG data used to compute the onsets are available here:

Bieniek MM, Bennett PJ, Sekuler AB, Rousselet GA (2015) Data from: A robust and representative lower bound on object processing speed in humans. Dryad Digital Repository. http://dx.doi.org/10.5061/dryad.46786

Fake data

Before we get started on our experimental data, we should explore a toy example for which we know exactly what to expect. Indeed, it can be a very bad idea to apply completely new tools to your own data, because there are potentially too many unknowns, and no easy way to check that the tools are doing what they’re supposed to. So first we create fake data to check that the procedure works. We make a sample of 20 observations with a mean of 100 and standard deviation of 5.

Run Bayesian analysis using BEST’s defaults

We use BEST’s defaults with broad priors, 3 chains, 500 adaptation steps, 1,000 burn-in steps and 10,000 monitoring steps. Here are the results:
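In code, the analysis is along these lines (a sketch: the exact call is in onset_priors.R, and the argument names reflect my reading of the BEST package):

```r
# Fake data + one-group BEST analysis, mostly default settings.
library(BEST)
set.seed(1)
y <- rnorm(20, mean = 100, sd = 5)  # n = 20, mean 100, SD 5
out <- BESTmcmc(y, numSavedSteps = 10000, burnInSteps = 1000)
plotAll(out)  # posterior histograms + posterior predictive t distributions
summary(out)
```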

fig1_test

Figure 1. Parameter estimation given BEST’s default broad priors. Panels A-D show histograms of credible values from the posterior distribution for the mean, standard deviation, effect size and normality parameters. In panel A, the expected value of 100 is indicated in green under the mode of the marginal mean distribution. This value appears between 2 values: the percentage of the posterior distribution below and above the expected value. HDI stands for highest density interval. Panel E shows examples of representative posterior predictive t distributions. These examples appear in blue and are combinations of parameter values illustrated in panels A, B and D. In red is a histogram of the data. The distributions in blue are not data; they are posterior predictive distributions given our priors and the data.

The results are as expected given the fake data we have generated: the MCMC (Markov Chain Monte Carlo) Gibbs algorithm seems to do a good job at sampling regions of the posterior distribution that are compatible with the data. But how do we assess the quality of the analysis? This is a huge topic covered in part in Kruschke’s book, Doing Bayesian data analysis, 2nd edition. He offers a diagnostic plotting function diagMCMC to investigate the behaviour of the chains.

Run diagnostic tools

We run diagMCMC on the mu parameter and obtain the figure below. Although Kruschke suggests using this figure to determine whether the chains have converged, that is not a very appropriate description, because strictly speaking Markov chains do not converge. So what does this figure actually tell us?

fig2_test_diag_mu

Figure 2. MCMC diagnostics. A Trace plots of the iterations, or steps, of the 3 chains for parameter mu (the mean). The first 1,500 steps were discarded because of the default 500 adaptation steps and 1,000 burn-in steps. The strong overlap among the 3 chains suggests that they are representative of the posterior distribution. B Autocorrelation functions for lags 0 to 35 for the 3 chains. This gives an indication of the number of steps necessary for the chains to provide independent samples from the posterior distribution. Here it looks as if 5 steps between samples would be sufficient to obtain a more representative estimation of the posterior distribution. ESS is the effective sample size: it estimates the length of a chain without autocorrelation that would give us the same information as the current one. Here we used 10,000 steps, so an ESS of ~6,000 suggests redundancy among samples, which are taken from very similar parts of the distribution. For an accurate description of the 95% HDI, an ESS of at least 10,000 is recommended according to Kruschke; for particular applications it would be wise to validate that number. C Shrink factor over iterations (steps) in the chains. The shrink factor is a measure of between-chain variance relative to within-chain variance. A value close to 1 again suggests that the chains are sampling similar representative regions of the posterior distribution. A shrink factor <1.1 is a recommended cut-off according to Kruschke. D Density plots of the parameter values for the 3 chains, and the 95% HDI for each chain. The 3 density plots and HDIs overlap well, suggesting again that the parameter values have been sampled from similar representative regions of the posterior distribution. MCSE is the Monte Carlo standard error – the lower the better. I do not understand yet how MCSE, ESS and shrink factors are calculated.

The 3 chains appear to be well mixed and to be sampling from a high-probability region. However there is some autocorrelation, and the ESS is ~6000. Autocorrelation can be associated with poor, unrepresentative, sampling of the posterior distribution. To see if this is a concern, we could run the MCMC analysis again, and compare the 95% HDIs. If they are very similar between analyses, then we have probably sampled sufficiently from similar representative portions of the posterior distribution, and there is nothing to worry about. An alternative is to compare the results from the 3 chains we ran independently: in panel D of the figure above, the 3 marginal distributions of the mean and their associated 95% HDIs are very similar, so the sampling was probably sufficient.

To decrease the autocorrelation, we could also thin the chains, or increase the number of steps. See the very useful discussion in Kruschke’s blog suggesting that thinning is a poor strategy against auto-correlation. It seems that introducing burn-in steps is also unnecessary.

Results with 5 thinning steps

Let’s try thinning anyway. I’ll report on better strategies once I have performed some simulations. In this example, using 5 thinning steps gets rid of the auto-correlation and might be sufficient to provide stable estimation of the mean and its 95% HDI:

fig3_test_with_thinning_diag_mu

Figure 3. MCMC diagnostics for analyses with 5 thinning steps.

The results show improved HDI consistency across the 3 chains, starting from already very consistent results. In this case, 5 thinning steps were sufficient to get an ESS around the expected 10,000, although running the same analysis again produced ESS values ranging from ~8,000 to ~10,000. If our main interest is the mode and the 95% HDI of the mean of the samples from the posterior distribution, then thinning does not make much difference compared to the previous results. The required precision of the estimation depends on the application of course.

Results are robust to outliers

In the lab, we only adopt tools that are robust against outliers. Certainly, t-tests and ANOVAs on means are not robust. What about our new Bayesian tool? To check, we add outliers to our sample of fake data, and run the analysis again. The results are very convincing:

fig4_test_with_outliers

Figure 4. Parameter estimation given data with outliers.

Although the estimation of SD has changed, here our main interest is to estimate the mean of probable underlying distributions compatible with the data. And for the mean the results with outliers are almost identical to those without outliers. Thus, MCMC with a t distribution is robust: the estimation of the mean is barely different from the original, and within random fluctuations expected from running the same MCMC several times. You can try different combinations of outliers to convince yourselves: personally, I’m sold.
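To try it yourself, a minimal continuation of the earlier sketch:

```r
# Contaminate the fake sample with extreme values and re-run.
yout <- c(y, 150, 160, 170)
out_out <- BESTmcmc(yout, numSavedSteps = 10000)
plotAll(out_out)  # mu barely moves; the t's heavy tails absorb the outliers
```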

Impact of priors

How are the results affected by our choice of priors? Instead of BEST’s default settings, we can for instance run the analysis using a uniform prior on the mean and a gamma prior on the SD. We strongly expect onsets to be contained in the broad interval 0-200 ms, so we use a uniform distribution spanning that interval. This gives results very similar to the original ones, which is expected because by default BEST uses very broad priors.

fig5_test_with_unif_prior

Figure 5. Parameter estimation given uniform prior on the mean.

But what happens if we run the analysis with strong priors on the mean? Let’s assume that we expect onsets to have a mean around 80 ms, with little dispersion around that value (mean prior SD = 3). In that case the posterior samples for the mean look very different from those obtained using broad priors (Figures 1 & 5):

fig6_test_with_strong_priors

Figure 6. Posterior distribution of the mean given a strong 80 ms prior. The mode is used as the central tendency of the distribution – although it can be more unstable than the median, it tends to be more representative. The expected value of 100 is indicated in green under the mode and marked by a green vertical dotted line. This value appears between 2 values: the percentage of the posterior distribution below and above the expected value. Under that, in red, is a [95, 105] ROPE, marked by vertical dotted lines. The 3 percentage values indicate the percentage of the MCMC distribution below, inside, and above the ROPE. Finally, the 95% HDI is indicated by a thick horizontal black line at the bottom of the histogram.

That’s reassuring, because if we have strong prior knowledge about the underlying distribution from which the data might be sampled, a new experiment should not be able to override that knowledge. Here, in particular, the 95% HDI does not include 100, the sample mean. So should we conclude that 100 is not a credible value for the underlying distribution, given our prior knowledge and the current experiment? If we were dealing with processing speed estimates from EEG recordings, our decision should be guided by other physiological considerations. For instance, if we think that our measurements originate from one particular cortical area, then after considering conduction times between areas & other factors (Salin & Bullier, 1995; Nowak & Bullier, 1997), we could use for instance a narrow margin of error of 5 ms on either side of 100 ms. We would then define a region of practical equivalence, or ROPE, of [95, 105]. Because the HDI does overlap with the proposed ROPE, we should probably reserve judgement about whether 100 is credible given the data.
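In code, the strong prior and the ROPE summary might look like this; the priors list with muM and muSD reflects my understanding of the BEST v2 interface, and the posterior samples of the mean sit in the mu column of the returned object:

```r
# Strong prior on the mean (80 ms, prior SD = 3), then a [95, 105] ROPE.
out_strong <- BESTmcmc(y, priors = list(muM = 80, muSD = 3),
                       numSavedSteps = 10000)
mu <- out_strong$mu                     # posterior samples of the mean
c(below = mean(mu < 95),                # proportion below the ROPE
  inside = mean(mu >= 95 & mu <= 105),  # proportion inside
  above = mean(mu > 105))               # proportion above
```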

Bayesian analysis of EEG onsets

Now that we’ve gained some confidence in the tools, let’s do a Bayesian analysis of our real dataset of EEG onsets. The variable ses1 contains onsets from 120 participants. ses2 contains onsets from 74 participants who also provided ses1 onsets. Although this is a paired design, and the results can be used to investigate test-retest reliability (see our EJN 2015 paper), here we treat the two sessions as independent.

First, let’s plot kernel density estimates for the two sessions:

fig7_kde_2sessions

Figure 7. Kernel density estimates for session 1 and session 2 observations.

Bayesian analysis on session 1 using BEST’s defaults

To start, we use BEST’s defaults. Before we consider the results, we call the diagnostic plotting function diagMCMC for mu, our main parameter of interest.

fig8_ses1_diag_mu

Figure 8. MCMC diagnostics for session 1 onset analysis using BEST’s defaults.

The diagnostic tools suggest the chains are well mixed and seem to be sampling from a high-probability region. However, the ESS is ~6,000, which calls for some thinning or more steps. So let’s try again with 20,000 steps:

fig9_ses1_with_more_steps_diag_mu

Figure 9. MCMC diagnostics for session 1 onset analysis using 20,000 steps.

ESS is now ~10,000, and all the diagnostic indicators look good. It seems we’ve got a good sampling of the posterior distribution. So let’s display the results, with a 100 ms onset for comparison:

fig10_ses1

Figure 10. Parameter estimation for session 1 onset data using 20,000 iterations.

Here we get a mode of 92.6 and 95% HDI [88.4, 96.7]. If we had the hypothesis of a mean onset of 100 ms, the results suggest a credible mean lower than 100 ms. Let’s plot only mean posterior samples:

fig11_ses1_mean_posterior

Figure 11. Session 1 onset posterior distribution of the mean.

A 95-105 ms ROPE overlaps with the 95% HDI, so we conclude that 100 ms is still a credible value.

Bayesian analysis on session 1 using informed priors

What happens if we use more informed priors? For instance, the literature suggests that face onsets are around 50-150 ms. Here are the results using a uniform distribution on that interval:

fig12_ses1_unifprior

Figure 12. Parameter estimation for session 1 data using a uniform prior.

And the mean posterior only, with a [95, 105] ROPE:

fig13_ses1_unifprior_mean_posterior

Figure 13. Session 1 onset posterior distribution of the mean given a uniform prior.

As expected with uniform priors, the results changed very little compared to using BEST’s default options.

Bayesian analysis of session 2 results

With our new posterior estimates, we can now study results from session 2 using more informed priors. We plug in estimates of the posterior samples from session 1 as priors for session 2. This makes a lot of sense, given that session 1 has a large sample size (for this type of research) and is therefore our best description of the population we’re trying to estimate. Here are the results:

fig14_ses2_ses1prior

Figure 14. Parameter estimation for session 2 onset data given session 1 priors.

fig15_ses2_ses1prior_mean_posterior

Figure 15. Session 2 onset posterior distribution of the mean given session 1 priors.

Compared to session 1 estimation, we’ve gained information: the ROPE is now completely outside the 95% HDI, which is [87.3, 93.4], and the mode of the mean is 90.3. So 100 ms is no longer a credible onset value.
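The plug-in step itself can be sketched as follows; summarising the session 1 posterior of the mean by its mean and SD is one plausible choice, and onset_priors.R remains the reference for the actual call:

```r
# Use session 1 posterior samples of mu to build session 2 priors.
ses1_out <- BESTmcmc(ses1, numSavedSteps = 20000)
ses2_out <- BESTmcmc(ses2, numSavedSteps = 20000,
                     priors = list(muM = mean(ses1_out$mu),
                                   muSD = sd(ses1_out$mu)))
```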

What do we get if we pretend that session 1 does not exist?

  • session 2 results with session 1 priors: 95% HDI = [87.3, 93.4], mode of the mean = 90.3 (Figure 15)

  • session 2 results with broad priors: 95% HDI = [83.2, 92.2], mode of the mean = 88.1

Because the results are very similar across the two sessions, using broad priors instead of session 1 priors gives very similar results. However, session 2 estimates using broad priors are shifted to the left, because the session 2 data distribution suggests a slightly lower mean compared to session 1.

In our EJN paper, we reported these median onsets with 95% percentile bootstrap confidence intervals:
– session 1 = 92 ms [85, 99]
– session 2 = 85 ms [79, 92]
Our Bayesian estimation across sessions suggests that the underlying distribution’s most credible mean is 90 ms. The 95% HDI is also about half as wide as the 95% confidence interval, so we’ve reduced the uncertainty about the mean.

The same procedure employed in this section can be used to analyse a new distribution of onsets from similar experiments. We have collected a smaller number of observations in a series of new experiments, and we will be analysing the results using session 2 posteriors as priors, themselves obtained given session 1 posteriors.

Comparison to a new point estimate

Let’s assume someone finds an onset at 50 ms, probably using group statistics, such that only a single point estimate is available. Before writing a paper declaring “face processing takes 50 ms”, is 50 ms credible given our estimation from 2 large samples? Let’s assume a ROPE of 10 ms on each side of the point estimate, so that we would consider 50 ms not credible only if our previous 95% HDI did not overlap with the interval 40-60 ms at all. There are other ways to estimate the ROPE, for instance by sampling participants with replacement many times and computing the group estimate for every iteration. Given that most MEEG papers I have ever read report only a single value, it’s very likely that future studies will focus on point estimates. Here are the results with a [40, 60] ROPE:

fig16_ses2_comp_to_point_estimate

Figure 16. Comparison between a 50 ms point estimate + ROPE & session 2 onset posterior distribution of the mean.

Well, as you’ve guessed, the answer is no, 50 ms is not a statistically credible onset! It is also not physiologically credible! And as incredible as such a 50 ms processing speed claim might seem, it has been made before, and I recently received a paper making such a claim to review… Now I can point the authors to this blog. Go Bayes!