Readers of my blog often ask why I have such a bee in my bonnet about measurement and replication. The measurement question I have answered in a previous blog post. In short, I think the claims to measurement in our discipline are on shaky ground, to put it politely. As such, I often think we should focus more on evaluating usefulness rather than chasing infinitesimally small, and seemingly futile, gains in measurement accuracy.

While exaggerated claims to measurement exist, this does not mean that, as a discipline, we should stop putting a primacy on the replication of results. Replication is the bedrock of our discipline. We have drawn too many psychological phenomena from single, unreplicated studies, and this has done extensive damage to our field.

In a surprising turn, the US military is looking to use artificial intelligence to separate credible research from the less credible, a bullshit detector if you will. The logic behind the tool is that there are markers in research that indicate it is less likely to hold up to replication. Indeed, earlier research cited in the article notes that peer review is fairly accurate at predicting whether a finding will replicate.

For me, we are continuing to cloud the problem. What is required is good science. Studies should be designed as a continuation of previous research, in turn establishing replication by design. As a discipline, we need to embrace that we are both a science and a philosophy and, as such, rely as much on good logic as on the statistical defence of esoteric findings. We must engage in ecologically valid studies, however hard that might be.

Replication is important, and any step taken to increase our credibility in the public’s eye is a positive move. But let us not forget that we are first and foremost scientists and philosophers of the brain and behaviour, and our best defence will always be quality, meaningful research.


Christodoulou, E., et al. (2019). A systematic review shows no performance benefit of machine learning over logistic regression for clinical prediction models. Journal of Clinical Epidemiology, 110, 12–22.