In last month’s post, I signed off by noting that impartiality was a pervasive myth in the industry. The corollary is that assuming impartiality allows many of the industry’s myths not only to continue but to flourish. Very few in the industry can lay claim to being completely impartial, yours truly included. The industry at all levels has inherent biases that any critical psychologist must be mindful of. The bias starts with universities and research, and the myth is then passed on, often by practitioners, to the consumer (be that a person or an organisation).

A colleague recently sent me a short paper that I consider compulsory reading for anyone with a critical mind in the industry. The article uses the metaphor of Dante’s Inferno to discuss the demise of science. Keeping with the theme, I would like to use the biblical metaphor of the Four Horsemen of the Apocalypse in reference to the myth of impartiality. These Horsemen represent the four areas where impartiality is professed but often not practised, resulting in a discipline that fails to deliver its followers the Promised Land it touts. The Four Horsemen in this instance are: Universities, Research, Practitioners, and Human Resources.

Unlike the biblical version, destiny is in our hands, and I want to continue to present solutions rather than simply highlight problems. Thus, each of the Four Horsemen of impartiality can be defended against (or at least inflicted with a flesh wound) with some simple virtuous steps that attack the myth of impartiality. Sometimes these steps require nothing more than acknowledging that the science and practice of psychology is not impartial. At other times we are called to address the partiality directly. Because of the length of the topic, I will break this into two posts for our readers.

Universities

Many universities are best thought of as corporations. Their consumers are students. Like any other corporation, they must market themselves to attract consumers (students) and give students what they want (degrees). To achieve this end, a factory-type process is often adopted, which in the world of education means teaching students to repeat and apply rules. Moreover, students want to at least feel that they are becoming educated, and numbers and rules provide this veil. Finally, the sheer complexity of human behaviour means that restrictive paradigms for psychology are adopted instead of a deep critical analysis of the human condition. This in turn gives the scale required to maximise the consumer base: an easy-to-digest product, respectability, and the capacity to scale the production (education) for mass consumption.

For this reason, Psychology is often positioned purely as a science, which it is not. This thinking is reinforced by an emphasis on quantitative methodologies, which props up the myth of measurement. Papers are presented without recognition of the inherent weaknesses and limitations of what is being discussed. Quality theoretical thinking is subordinated to statistics. The end result is that while university is presented as an impartial place of learning, this ignores the drivers of partiality inherent in the system. Often the rules created to drive the learning process do so to meet the needs of the consumer and increase marketability, at the expense of an impartial education. Those who come out of the system may fail to fully appreciate the limitations of their knowledge, and as the saying goes, ‘a little knowledge is a dangerous thing’.

Universities are the most important of the Four Horsemen of impartiality, as it is within universities that many of the other myths are generated. By training young minds in a particular way of thinking while appearing impartial, universities create ‘truths’ in the discipline that are simply one limited way of viewing the topic. The result is myths like the myth of measurement (and the various conclusions drawn from research) that become accepted as truth, so that students graduate with faulty information or overconfidence in research findings. Those who do not attend university but hold graduates in a degree of esteem likewise fail to see that they too are victims of the myth of impartiality.

The virtuous steps

This post is too short to address all the shortcomings of universities in the modern environment. However, if we do not address them, we will lose more and more quality researchers and teachers from our ranks [see: http://indecisionblog.com/2014/04/07/viewpoint-why-im-leaving-academia/]. What I suggest is that Psychology re-embrace its theoretical roots by becoming more multidisciplinary in its approach, incorporating science and statistics alongside the likes of philosophy and sociology.

The second step is to make a course in ‘Critical Psychology’ compulsory. Such a course would go beyond the sociopolitical definition of critical psychology often given and focus on the issues of critique discussed in these posts: issues of measurement, the role of theory, the problems of publish or perish, and so on. In short, a course that covers the problems inherent in the discipline, acknowledging that these are things every psychologist, applied or researching, must be mindful of. To the universities already taking these steps in a meaningful way, I commend you.

Research

The idea that research is impartial was dismissed some time ago by all but the most naïve. The problem is not so much one of deliberate distortion, although, as we will see, that too can be problematic. Rather, it is the very system of research that is not impartial.

First, there is the ‘publish or perish’ mentality that pervades all who conduct research, whether academics or applied psychologists. Researchers are forced by market drivers or university standards to publish as much as possible as ‘evidence’ that they are doing their job. The opportunity cost is that quality research is often in short supply. For one of the best summaries of this problem, I draw your attention to Trimble, S.W., Grody, W.W., McKelvey, B., & Gad-el-Hak, M. (2010). The glut of academic publishing: A call for a new culture. Academic Questions, 23(3), 276-286. Among the paper’s many powerful points are that quality research takes time, which runs counter to the ‘publish or perish’ mentality, and that a real contribution often goes against conventional wisdom, putting one in the direct firing line of many contemporaries.

Why does this glut occur? I can think of three key reasons.

The first is that researchers are often graded by the quantity, not quality, of the work they produce. The general public tends not to distinguish between grades of journals, and academic institutions have key performance indicators that require a certain number of publications per year.

The second reason is that journals create parameters by which research will be accepted. I have discussed this topic to death in the past, but evidence of bias includes favouring novel findings over replication, favouring papers that reject the null hypothesis, and treating numbers, rather than logic and theory, as the criterion of supporting evidence. This in turn creates a body of research that projects itself as the body of knowledge in our discipline when in reality it is simply a fraction, and a distorted fraction at that, of how we understand human complexity (c.f. Francis, G. (2014). The frequency of excess success for articles in Psychological Science. Psychonomic Bulletin & Review (in press; http://www1.psych.purdue.edu/~gfrancis/pubs.htm), 1-26).

Abstract

Recent controversies have questioned the quality of scientific practice in the field of psychology, but these concerns are often based on anecdotes and seemingly isolated cases. To gain a broader perspective, this article applies an objective test for excess success to a large set of articles published in the journal Psychological Science between 2009-2012. When empirical studies succeed at a rate higher than is appropriate for the estimated effects and sample sizes, readers should suspect that unsuccessful findings were suppressed, the experiments or analyses were improper, or that the theory does not properly account for the data. The analyses conclude problems for 82% (36 out of 44) of the articles in Psychological Science that have four or more experiments and could be analyzed.

The third reason is funding. Where money is involved, there is always a perverse incentive to distort. This occurs in universities where funding is an issue, and in industry where a psychologist may be brought in to evaluate an intervention. The reasons are obvious, and the mechanisms are often more subtle than outright distortion. For example, universities that require funding from certain beneficiaries may be inclined to undertake research that, by design, returns positive findings in a certain area, and is thus viewed favourably by grants committees. The same may be true in industry, where an organisational psychology company is asked to evaluate a social programme but the terms of the evaluation are such that the real negative findings (such as opportunity cost) are hidden. This has led to calls for transparency in the discipline, such as in Miguel, E., Camerer, C., Casey, K., Cohen, J., Esterling, K., Gerber, A., Glennerster, R., Green, D., Humphreys, M., Imbens, G., Laitin, D., Madon, T., Nelson, L., Nosek, B.A., …, Simonsohn, U., & Van der Laan, M. (2014). Promoting transparency in social science research. Science, 343(6166), 30-31. While the paper makes a strong argument for quality design, it also notes the trouble with perverse incentives:

Accompanying these changes, however, is a growing sense that the incentives, norms, and institutions under which social science operates undermine gains from improved research design. Commentators point to a dysfunctional reward structure in which statistically significant, novel, and theoretically tidy results are published more easily than null, replication, or perplexing results (3, 4). Social science journals do not mandate adherence to reporting standards or study registration, and few require data-sharing. In this context, researchers have incentives to analyze and present data to make them more “publishable,” even at the expense of accuracy. Researchers may select a subset of positive results from a larger study that overall shows mixed or null results (5) or present exploratory results as if they were tests of pre-specified analysis plans (6).

Then there are the outright frauds (see: http://en.wikipedia.org/wiki/Diederik_Stapel). For those who have not read about this case in other blogs, I urge you to look at the New York Times interview with Stapel. My favourite quote:

“People think of scientists as monks in a monastery looking out for the truth,” he said. “People have lost faith in the church, but they haven’t lost faith in science. My behavior shows that science is not holy.”… What the public didn’t realize, he said, was that academic science, too, was becoming a business. “There are scarce resources, you need grants, you need money, there is competition,” he said. “Normal people go to the edge to get that money. Science is of course about discovery, about digging to discover the truth. But it is also communication, persuasion, marketing. I am a salesman…

The virtuous steps

Addressing the partiality of research requires a collective approach. Universities with a commitment to research must aim for quality over quantity and allow researchers the time to develop quality research designs that can be tested and examined over longer periods. Research committees must be multidisciplinary to ensure that a holistic approach to research prevails.

We must put arm’s length between funding and research. I don’t have an answer for how this would occur, but until it does, universities will be disincentivised from conducting fully impartial work. Journals need to be established that provide an outlet for comprehensive research. This would mean removing word limits in favour of comprehensive research designs that allow more alternative hypotheses to be tested and dismissed. Systems thinking needs to become the norm, not the exception.

Finally, and most importantly, our personal and professional ethics must be paramount. We must contribute to the body of knowledge that critiques the discipline for the improvement of psychology. We must be aware of any myth of impartiality in our own work, make it explicit, and try to limit its effect, whether we work as researchers or practitioners. We must challenge the institutions (corporate and academic) we work for to raise their game, in incremental steps.

In Part Two, I will take a critical look at my own industry, psychometric testing and applied psychology, and at how prevalent the myth of impartiality is there. I will also discuss how the myth is furthered by those who apply our findings within Human Resources departments.

Read Part 2 here