For those who may not be aware, the ‘Science of Science’ is in disarray. What constitutes good science, what is genuinely scientific, and whether science is as objective and impartial as it claims are all under the microscope. This is affecting many areas of science and has even led a Nobel Prize winner to boycott the most prestigious journals in his field.

Nobel winner declares boycott of top science journals: Randy Schekman says his lab will no longer send papers to Nature, Cell and Science as they distort the scientific process.

This pervasive problem in the field of science is perhaps best covered in the widely cited Economist article ‘How Science Goes Wrong’.

This questioning of science is nowhere more apparent than in our discipline of I/O psychology. Through various forums and the academic and non-academic press, I have become increasingly aware of the barrage of critical thinking going on in our field. The result: much of what we as I/O psychologists have taken to be true is nothing more than fable and wishful thinking.

Over this year I want to explore one myth each month with readers of our blog. These are myths about the very heart of I/O psychology that are often simply taken as given.

The idea of attacking myths has long been central to OPRA’s philosophy:

  • The Myths and Realities of Psychometric Testing
  • Lies, Lies and Damn Business Psychology
  • The Truth About Competencies: Who’s Conning Who?

And there are many other myth-busting posts in this forum.

To kick off this new series I wish to start with the current state of play in the field: in particular, the fundamental problem that questionable research practices often arise when there is an incentive to obtain a certain outcome.

John, L. K., Loewenstein, G., & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth-telling. Psychological Science, OnlineFirst. A description of the study, with comments, is available at: http://bps-research-digest.blogspot.co.nz/2011/12/questionable-research-practices-are.html

This creates a fact well known to everyone in the publish-or-perish game: your best chance of getting published rests not so much on the quality of the research as on the null hypothesis being rejected (i.e. you have a ‘eureka’ moment, however arbitrary).

Fanelli, D. (2010). “Positive” results increase down the hierarchy of the sciences. PLoS ONE, 5(4), e10068. http://www.plosone.org/article/info:doi%2F10.1371%2Fjournal.pone.0010068

Gerber, A. S., & Malhotra, N. (2008). Publication bias in empirical sociological research: Do arbitrary significance levels distort published results? Sociological Methods & Research, 37(1), 3-30.
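To make the mechanics of this bias concrete, here is a minimal Python simulation (my own illustrative sketch, not drawn from the papers above). It assumes a small true effect studied with small samples, and ‘publishes’ only the statistically significant results:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_studies, n_per_group, true_d = 10_000, 20, 0.2   # small true effect, small samples

published_d = []
for _ in range(n_studies):
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_d, 1.0, n_per_group)
    _, p = stats.ttest_ind(treatment, control)
    if p < 0.05:                                   # only 'positive' results get published
        # with unit-variance populations, the mean difference approximates Cohen's d
        published_d.append(treatment.mean() - control.mean())

print(f"True effect size:           {true_d:.2f}")
print(f"Share of studies published: {len(published_d) / n_studies:.1%}")
print(f"Mean published effect size: {np.mean(published_d):.2f}")
```

Run as written, roughly one study in ten reaches significance, and the published effect sizes average about three times the true value. The published literature exaggerates systematically even though no individual researcher has done anything wrong.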

The upshot is that the bulk of the research in our area is trivial in nature, goes unreplicated, and simply does not support the claims being made. This is especially the case in psychology, where claims often run from the exaggerated to the absurd.

Francis, G. (2013). Replication, statistical consistency, and publication bias. Journal of Mathematical Psychology, EarlyView, 1-17. http://dx.doi.org/10.1016/j.jmp.2013.02.003

Scientific methods of investigation offer systematic ways to gather information about the world, and in the field of psychology application of such methods should lead to a better understanding of human behavior. Instead, recent reports in psychological science have used apparently scientific methods to report strong evidence for unbelievable claims such as precognition. To try to resolve the apparent conflict between unbelievable claims and the scientific method many researchers turn to empirical replication to reveal the truth. Such an approach relies on the belief that true phenomena can be successfully demonstrated in well-designed experiments, and the ability to reliably reproduce an experimental outcome is widely considered the gold standard of scientific investigations. Unfortunately, this view is incorrect; and misunderstandings about replication contribute to the conflicts in psychological science. … Overall, the methods are extremely conservative about reporting inconsistency when experiments are run properly and reported fully.
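The arithmetic behind this kind of consistency check is simple enough to sketch. If each experiment in a multi-study paper has only modest statistical power, the probability that every one of them succeeds is the product of those powers, so a clean sweep of ‘successes’ is itself improbable. The following is a hypothetical illustration of that logic (invented power values, not Francis’s actual analysis):

```python
from math import prod

# Assumed (hypothetical) power of each of five reported experiments
powers = [0.5, 0.6, 0.55, 0.5, 0.6]

# If the experiments are independent and run properly, the chance that
# ALL of them reject the null is the product of their powers.
p_all_succeed = prod(powers)
print(f"P(all {len(powers)} experiments significant) = {p_all_succeed:.3f}")
# ~0.05: a run of uniformly 'successful' modestly powered studies is itself
# an improbable event - the red flag such consistency tests look for.
```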

The paucity of quality scientific research is leading to more and more calls for fundamental change in what qualifies as good science and research in our field.

Miguel, E., Camerer, C., Casey, K., Cohen, J., Esterling, K., Gerber, A., Glennerster, R., Green, D., Humphreys, M., Imbens, G., Laitin, D., Madon, T., Nelson, L., Nosek, B. A., …, Simonsohn, U., & Van der Laan, M. (2014). Promoting transparency in social science research. Science, 343(6166), 30-31.

“There is growing appreciation for the advantages of experimentation in the social sciences. Policy-relevant claims that in the past were backed by theoretical arguments and inconclusive correlations are now being investigated using more credible methods. … Accompanying these changes, however, is a growing sense that the incentives, norms, and institutions under which social science operates undermine gains from improved research design. Commentators point to a dysfunctional reward structure in which statistically significant, novel, and theoretically tidy results are published more easily than null, replication, or perplexing results.”

Funder, D. C., Levine, J. M., Mackie, D. M., Morf, C. C., Vazire, S., & West, S. G. (2013). Improving the dependability of research in personality and social psychology: Recommendations for research and educational practice. Personality and Social Psychology Review, EarlyView, 1-10.

In this article, the Society for Personality and Social Psychology (SPSP) Task Force on Publication and Research Practices offers a brief statistical primer and recommendations for improving the dependability of research. Recommendations for research practice include (a) describing and addressing the choice of N (sample size) and consequent issues of statistical power, (b) reporting effect sizes and 95% confidence intervals (CIs), (c) avoiding “questionable research practices” that can inflate the probability of Type I error, (d) making available research materials necessary to replicate reported results, (e) adhering to SPSP’s data sharing policy, (f) encouraging publication of high-quality replication studies, and (g) maintaining flexibility and openness to alternative standards and methods. Recommendations for educational practice include (a) encouraging a culture of “getting it right,” (b) teaching and encouraging transparency of data reporting, (c) improving methodological instruction, and (d) modeling sound science and supporting junior researchers who seek to “get it right.”
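As a concrete illustration of recommendation (b), here is one way to report an effect size with a 95% confidence interval in Python. This is a hypothetical sketch with made-up data; the variable names and the large-sample approximation for the standard error of d are my own choices, not the Task Force’s:

```python
import numpy as np

rng = np.random.default_rng(1)
group_a = rng.normal(0.5, 1.0, 40)   # hypothetical treatment scores
group_b = rng.normal(0.0, 1.0, 40)   # hypothetical control scores

# Cohen's d using the pooled standard deviation
n1, n2 = len(group_a), len(group_b)
pooled_var = ((n1 - 1) * group_a.var(ddof=1)
              + (n2 - 1) * group_b.var(ddof=1)) / (n1 + n2 - 2)
d = (group_a.mean() - group_b.mean()) / np.sqrt(pooled_var)

# Approximate 95% CI from the large-sample standard error of d
se_d = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
print(f"d = {d:.2f}, 95% CI [{d - 1.96*se_d:.2f}, {d + 1.96*se_d:.2f}]")
```

Reporting the interval alongside d makes the (im)precision of the estimate impossible to hide, which is precisely the point of recommendations (a) and (b).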

Cumming, G. (2013). The new statistics: Why and how. Psychological Science, EarlyView, 1-23.

“We need to make substantial changes to how we conduct research. First, in response to heightened concern that our published research literature is incomplete and untrustworthy, we need new requirements to ensure research integrity. These include pre-specification of studies whenever possible, avoidance of selection and other inappropriate data-analytic practices, complete reporting, and encouragement of replication. Second, in response to renewed recognition of the severe flaws of null-hypothesis significance testing (NHST), we need to shift from reliance on NHST to estimation and other preferred techniques. The new statistics refers to recommended practices, including estimation based on effect sizes, confidence intervals, and meta-analysis. The techniques are not new, but adopting them widely would be new for many researchers, as well as highly beneficial. This article explains why the new statistics are important and offers guidance for their use. It describes an eight-step new-statistics strategy for research with integrity, which starts with formulation of research questions in estimation terms, has no place for NHST, and is aimed at building a cumulative quantitative discipline.”
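To show what this estimation thinking looks like in practice, here is a toy fixed-effect meta-analysis that pools effect sizes from five hypothetical studies using inverse-variance weights. The numbers are invented for illustration; this is not code from Cumming’s article:

```python
import numpy as np

# Invented effect sizes (Cohen's d) and standard errors from five studies
d  = np.array([0.30, 0.45, 0.10, 0.25, 0.38])
se = np.array([0.20, 0.25, 0.15, 0.22, 0.18])

# Fixed-effect (inverse-variance) pooling: more precise studies get more weight
w = 1.0 / se**2
d_pooled = (w * d).sum() / w.sum()
se_pooled = np.sqrt(1.0 / w.sum())
print(f"Pooled d = {d_pooled:.2f}, "
      f"95% CI [{d_pooled - 1.96*se_pooled:.2f}, {d_pooled + 1.96*se_pooled:.2f}]")
```

The question answered is “how big is the effect, and how precisely do we know it?” rather than “is p below .05?”, which is the shift Cumming is arguing for.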

But it is not all doom and gloom. There are simple steps that the scientist-practitioner can take to make sure that sense and sensibility prevail in the field. In this regard I offer three simple principles:

  1. Try your best to keep up to date with the literature: OPRA will do its best to publish relevant pieces that come to its attention via this blog!
  2. Don’t make exaggerated claims: Remember that no one has ‘magic beans’, because ‘magic beans’ do not exist. Dealing with human problems invariably involves complexity and levels of prediction that are far from perfect.
  3. Accept that our discipline is a craft, not a science: I/O psychology involves good theory, good science, and sensible qualitative and quantitative evidence, but these are often applied in a unique manner, as a craftsman (or craftsperson, if such a word exists) would apply them. Accepting this fact will liberate the I/O psychologist to use science, statistics and logic to produce the solutions that the industry, and more specifically their clients, require.

Keep an eye on our blog this coming year as we explore these myths and share other relevant information and products related to our field. Let us know if something is of interest to you, and we can blog about it or send you more information directly.