Two years ago, when this blog was started, the key driver was to have a forum not only to discuss best practice in I/O psychology but also to question much of the conventional wisdom held in this field. In particular, the relationship between science and rigour was noted as something that was too often more rhetoric than reality.

Since my last post back in 2010, I have been following the press on scientific integrity in psychology. I am pleased to see that the veil is being lifted on the emperor’s new clothes of “value-free science” and “true measurement” as applied to I/O psychology research. Prediction of human behaviour is fraught, and any claim of a respectable level of prediction is often a sign of ignorance rather than science. Yet the discipline continues this practice, with ironically predictable outcomes: fraud, pseudo-science, and marketing hype. The by-product is that real science is reduced to a secondary concern, with core concepts such as replication lost in the demand for the next fad.

Fraud is now well recognised in the industry; there have been a number of high-profile cases (e.g., Dutch Psychologist Accused of Research Fraud). The field has been found wanting, not only in its institutional drivers (Lawrence, 2009) but also on the fundamental issue of replication (Replication). This has led to demands for replication studies, and, as noted in Science (Carpenter, 2012), the discipline is trying to do something about it:

“In an even more daring effort, a group of more than 50 academic psychologists, which calls itself the Open Science Collaboration (OSC), has begun an unprecedented, large-scale project to systematically replicate psychological experiments recently published in leading journals. ‘We’re wringing our hands worrying about whether reproducibility is a problem or not,’ says psychologist Brian Nosek of the University of Virginia in Charlottesville, who is coordinating the effort. ‘If there is a problem, we’re going to find out, and then we’ll figure out how to fix it.’”

This is great news, and it can be seen as a shining light against a system that systematically discourages replication, the bedrock of science, while pandering to the demand for ‘new findings’ aligned to undisclosed ‘grant outcomes’ (see Fanelli, 2011). A “publish or perish” mentality drives scientists to publish as many articles as they can rather than to discover and disseminate the truth (Ioannidis, 2005; Simmons, Nelson, & Simonsohn, 2011), which is why one can understand what those studying replication are up against, and what they are trying to achieve.

I maintain that at the heart of this problem is an unwillingness to recognise that I/O psychology is an applied craft that dearly wants to be a science. As is often the case in recent years, the critique of the science of psychology is eloquently put by Joel Michell (2012):

“The inference from order to quantity is fundamental to psychometrics because the sorts of attributes that psychometricians aspire to measure are experienced directly only as ordered and, yet, it is concluded that such attributes are measurable on interval scales (i.e., that they are quantitative). This inference has been a feature of psychometrics since early last century, before which it permeated scientific thought and played a role in the development of psychophysics. Despite this, its cogency has been analysed only rarely. Elsewhere, I have argued that it is not deductively valid, a point that might be considered obvious except that attempts have been made to show otherwise. Its invalidity displayed, it is easily shown that it is not inductively reasonable either. However, it might still be urged that the inference from order to quantity is an inference to the best explanation: that is, that quantitative structure is reasonably abduced from order. I argue that the opposite is true: the most plausible hypothesis is that the sorts of attributes psychometricians aspire to measure are merely ordinal attributes with impure differences of degree, a feature logically incompatible with quantitative structure. If so, psychometrics is built upon a myth.”

This myth then extends to the very definition of psychometric measurement, including the increasing use of “smoke and mirrors” to claim that we are getting closer to “true measurement” in I/O psychology. Psychometrics are useful tools that let people explain themselves to others using a common semantic code on a comparative scale. Calling this “true measurement” is a step too far! The idea of prediction without some heavy disclaimers is a gigantic leap of logic!
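Michell’s point can be made concrete with a small worked example. The sketch below (in Python, with scores invented purely for illustration) shows that any strictly increasing rescoring of ordinal data preserves the order of responses yet changes the differences between them, so conclusions that rest on interval arithmetic, such as comparisons of means, can be artefacts of the scoring convention rather than properties of the attribute:

```python
# A minimal sketch of the order-vs-quantity problem.
# The scores below are invented for illustration only.

scores_a = [2, 3, 4]  # one group's test scores (assumed ordinal)
scores_b = [1, 4, 4]  # another group's scores

def mean(xs):
    return sum(xs) / len(xs)

# Treated as interval-scale numbers, the groups look identical:
print(mean(scores_a), mean(scores_b))  # 3.0 vs 3.0

# Now apply a strictly increasing (order-preserving) transformation,
# i.e. a different but equally legitimate way to number the same ranks:
f = lambda x: x ** 2

print(mean([f(x) for x in scores_a]))  # ~9.67
print(mean([f(x) for x in scores_b]))  # 11.0

# The order within each group is untouched, yet the equality of means
# has vanished. If the data are merely ordinal, the mean comparison
# reflects the scoring convention, not the attribute being "measured".
```

If the attribute is merely ordinal, as Michell argues is the most plausible hypothesis, then nothing in the data privileges one scoring over the other, and the “measurement” claim is doing work the data cannot.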

References

Carpenter, S. (2012). Psychology’s bold initiative: In an unusual attempt at scientific self-examination, psychology researchers are scrutinizing their field’s reproducibility. Science, 335, 1558-1561. (access here).

Fanelli, D. (2011). Negative results are disappearing from most disciplines and countries. Scientometrics, 90(3), 891-904. (access here).

Ioannidis, J.P. (2005). Why most published research findings are false. PLoS Medicine, 2(8), 696-701.

Lawrence, P.A. (2009). Real lives and white lies in the funding of scientific research: The granting system turns young scientists into bureaucrats and then betrays them. PLoS Biology, 7(9), 1-4. (access here).

Michell, J. (2012). The constantly recurring argument: Inferring quantity from order. Theory & Psychology, 22(3), 255-271.

Simmons, J.P., Nelson, L.D., & Simonsohn, U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychological Science, 22(11), 1359-1366.