In psychology and neuroscience, the typical sample size is too small. I’ve recently seen several neuroscience papers with n = 3-6 animals. For instance, this article uses n = 3 mice per group in a one-way ANOVA. This is a real problem because small sample size is associated with:
- low statistical power
- inflated false discovery rate
- inflated effect size estimation
- low reproducibility
- …
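The first three points can be illustrated with a quick simulation. Below is a minimal Python sketch (not from the original post; the function name and parameters are mine) that runs many two-group experiments with a true standardized effect of 0.5 and n = 10 per group. Power is low, and among the experiments that reach p < .05, the average estimated effect size is much larger than the true effect:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def simulate(n, d=0.5, n_sims=5000):
    """Simulate two-group experiments with true standardized effect d.
    Returns the mean estimated effect size over all experiments, the mean
    over the significant ones (p < .05), and the empirical power."""
    effects, sig_effects = [], []
    for _ in range(n_sims):
        a = rng.normal(0.0, 1.0, n)
        b = rng.normal(d, 1.0, n)
        t, p = stats.ttest_ind(b, a)
        # Cohen's d using the pooled standard deviation
        sp = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        d_hat = (b.mean() - a.mean()) / sp
        effects.append(d_hat)
        if p < 0.05:
            sig_effects.append(d_hat)
    return np.mean(effects), np.mean(sig_effects), len(sig_effects) / n_sims

mean_all, mean_sig, power = simulate(n=10)
print(f"n = 10: power = {power:.2f}, mean d over all runs = {mean_all:.2f}, "
      f"mean d among significant runs = {mean_sig:.2f}")
```

With these settings, power sits well below 50%, and conditioning on significance roughly doubles the apparent effect size: the significance filter only lets through the experiments that, by chance, overestimated the effect.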
Here is a list of excellent publications covering these points:
Button, K. S., Ioannidis, J. P., Mokrysz, C., Nosek, B. A., Flint, J., Robinson, E. S., & Munafò, M. R. (2013). Power failure: why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, 14, 365-376.
Colquhoun, D. (2014). An investigation of the false discovery rate and the misinterpretation of p-values. Royal Society Open Science, 1, 140216.
Forstmeier, W., Wagenmakers, E.-J., & Parker, T. H. (2016). Detecting and avoiding likely false-positive findings – a practical guide. Biological Reviews of the Cambridge Philosophical Society.
Lakens, D., & Albers, C. J. (2017, September 10). When power analyses based on pilot data are biased: Inaccurate effect size estimators and follow-up bias. Retrieved from psyarxiv.com/b7z4q
See also these two blog posts on small n:
When small samples are problematic
Small sample size also prevents us from properly estimating and modelling the populations we sample from. As a consequence, small n stops us from answering a fundamental, yet often ignored empirical question: how do distributions differ?
This important aspect is illustrated in the figure below. Columns show distributions that differ in four different ways; rows illustrate samples of different sizes. The scatterplots were jittered using ggforce::geom_sina in R, and the vertical black bars mark the mean of each sample. In row 1, examples 1, 3 and 4 have exactly the same mean, whereas in example 2 the means of the two distributions differ by 2 arbitrary units. The remaining rows illustrate random subsamples of the data in row 1. Above each plot, the t value, the mean difference and its confidence interval are reported. Even with 100 observations we might struggle to approximate the shape of the parent population. Without additional information, it can be difficult to determine whether an observation is an outlier, particularly for skewed distributions. In column 4, the samples with n = 20 and n = 5 are very misleading.
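The instability of the sample mean for skewed distributions, like the one in column 4, is easy to demonstrate. In this illustrative Python sketch (the post itself uses R; the function name is mine), we draw repeated samples from a standard lognormal distribution and measure how much the sample mean bounces around for different n:

```python
import numpy as np

rng = np.random.default_rng(0)

def mean_spread(n, n_reps=10000):
    """SD of the sample mean across n_reps samples of size n
    drawn from a standard lognormal distribution."""
    means = rng.lognormal(0.0, 1.0, size=(n_reps, n)).mean(axis=1)
    return means.std()

for n in (5, 20, 100):
    print(f"n = {n:3d}: SD of the sample mean = {mean_spread(n):.2f}")
```

The spread shrinks roughly as 1/√n, so a mean estimated from n = 5 skewed observations is several times noisier than one from n = 100, and a single extreme draw can dominate it.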
Small sample size could be less of a problem in a Bayesian framework, in which information from prior experiments can be incorporated into the analyses. In the blind, significance-obsessed frequentist world, small n is a recipe for disaster.
Small sample sizes are also problematic in Bayesian statistics, since small samples contain little information. Except, of course, when one observation/participant is a “population” in itself, with many observations per participant.
This is cool! Thanks.
Pingback: Replication, low power and sample sizes: an update – David Schmidt
Small sample size is a real problem in statistical analysis, because people are not aware of the importance of the information that could be generated by the hidden observations. A new approach to this kind of problem is under construction; once we have run enough simulations, we can proceed to solve such problems.
Statistical analyses apply simple techniques, most of the time based on the simple observed average. But why not look for solutions that are just as simple yet more reliable than those applied by researchers? The simplest way forward could be to apply either mathematical techniques rather than statistical ones, or more appropriate statistical ones. In fact there is more than one solution to this kind of problem, and in the next few months we will proceed to publish our results in this field.
Pingback: Small n correlations cannot be trusted | basic statistics
Pingback: Illustration of continuous distributions using quantiles | basic statistics
I found a solution for calculating confidence intervals for the population mean, variance and proportion with small sample sizes, whatever the population distribution.
Bechara NAJ Hanna
Pingback: "Hvor skal du bo?"-artikler overser vigtigt problem
We all agree that small n is problematic, but how small is small? There is so much subjectivity in this area of statistics. We see many small-n studies because the n's are small to us but sufficient to whoever reviewed the paper. Searching online will turn up many “rules of thumb” for sample sizes for linear regression, but almost all of them involve a simulation study, a subjective element in the final formula, and a warning that the recommendations are contextual. I refrain from judging other people's sample sizes when there is no completely objective criterion for judging them. For now, all we can do is demand that authors justify their sample sizes using the few objective methods available, e.g. power analysis.