In line with a common thread through my blogs, I call into question the notion that I/O psychology is indeed a science, let alone a pure science. The tag of a science is, in my opinion, a hindrance to our discipline’s progress. Below is a selection of papers posted to an I/O forum by Professor Paul Barrett which, when reviewed together, demonstrate the failures of psychology as a science or quantitative discipline. Extracts from these posts include:
Postman, N. (1984). Social science as theology. ETC: A Review of General Semantics, 41(1), 22-32.
‘….science is the quest to find the immutable and universal laws that govern processes, and it does so by making the assumption that there are cause and effect relations among these processes. The scientist uses mathematics to assist in uncovering and describing the structure of nature. At best, the sociologist, to take one example, uses mathematics merely to provide some precision to his ideas…I must tell you at the start that I reject the implications of the phrase ‘social science’ that is to say, I do not believe psychologists, sociologists, anthropologists, or media ecologists do science. I am fully persuaded that Michael Oakeshott’s distinction between processes and practices is definitive in explaining why this is the case … I believe with Oakeshott that there is an irrevocable difference between a blink and a wink. If it is a blink, we can classify the event as a process, meaning it has physiological causes which can be understood and explained within the context of established postulates and theories. If it is a wink, we must classify it as a practice, filled with personal and to some extent unknowable meanings and, in any case, quite impossible to explain or predict in terms of causal relations’.
Lilienfeld, S. (2010). Can psychology become a science? Personality and Individual Differences, 49(4), 281-288.
I am profoundly grateful to Tom Bouchard for helping me learn to think scientifically. Scientific thinking, which is characterized by a set of safeguards against confirmation bias, does not come naturally to the human species, as the relatively recent appearance of science in history attests. Even today, scientific thinking is in woefully short supply in many domains of psychology, including clinical psychology and cognate disciplines. I survey five key threats to scientific psychology – (a) political correctness, (b) radical environmentalism, (c) the resurrection of ‘common sense’ and intuition as arbiters of scientific truth, (d) postmodernism, and (e) pseudoscience – and conclude that these threats must be confronted directly by psychological science. I propose a set of educational and institutional reforms that should place psychology on firmer scientific footing.
Toomela, A. (2010). Quantitative methods in psychology: Inevitable and useless. Frontiers in Quantitative Psychology and Measurement, 1, 1-14.
Science begins with the question, what do I want to know? Science becomes science, however, only when this question is justified and the appropriate methodology is chosen for answering the research question. The research question should precede the other questions; methods should be chosen according to the research question and not vice versa. Modern quantitative psychology has accepted method as the primary focus; research questions are adjusted to the methods.
Schmitt, N. (1996). Uses and abuses of coefficient alpha. Psychological Assessment, 8(4), 350-353.
Barrett, P.T., & Paltiel, L. (1996). Can a single item replace an entire scale? POP vs the OPQ 5.2. Selection and Development Review, 12(6), 1-4.
These papers demonstrate the problems with using the 0.7-0.8 rule of thumb for internal consistency (coefficient alpha) as a measure of an assessment’s quality.
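One reason the rule of thumb misleads is that alpha is a function of test length as well as item quality: simply duplicating items pushes alpha upward without adding any new information. A minimal sketch of this, using simulated data (the single-factor design, sample sizes, and variable names are illustrative assumptions, not taken from the papers above):

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Simulated responses: 4 items, each reflecting one common factor plus noise
# (population alpha for this design is 0.8)
rng = np.random.default_rng(0)
factor = rng.normal(size=(200, 1))
scores = factor + rng.normal(size=(200, 4))

alpha_4 = cronbach_alpha(scores)
# Duplicate every item: no new content, yet alpha rises
alpha_8 = cronbach_alpha(np.hstack([scores, scores]))

print(f"alpha with 4 items: {alpha_4:.2f}")
print(f"alpha after duplicating every item: {alpha_8:.2f}")
```

The duplicated battery clears the 0.7-0.8 bar more comfortably than the original, despite measuring exactly the same thing; hence high alpha, on its own, says little about an assessment’s quality.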
Hurlbert, S.H., & Lombardi, C.M. (2009). Final collapse of the Neyman-Pearson decision-theoretic framework and the rise of the neoFisherian. Annales Zoologici Fennici, 46, 311-349.
The neoFisherian approach in a nutshell, from page 318, column 1, 3rd paragraph:
A core principle of this neoFisherian paradigm, then, is that in testing situations, an alpha should not be specified, and terms such as ‘statistically significant’ and ‘statistically non-significant’ should not be used, nor should useless and misleading symbolic notation such as ‘ns’ and ‘P > 0.05’. The neoFisherian label seems appropriate for three reasons. First, Fisher clearly was moving toward this position at the end of his career. Second, his original conception of significance testing did not require specification of a critical P value even though he appended that superfluity to it for reasons essentially psychological, historical and accidental in nature. And third, other concepts formalized by Neyman and Pearson but that we regard as admissible under the neoFisherian paradigm — such as alternative hypotheses, power, and confidence intervals — were all implicit in Fisherian significance testing regardless of what Fisher said about them or of how unsuccessful his idea of “fiducial intervals” proved to be.
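The practical upshot, as I read it, is to report the exact P value itself (alongside the effect estimate) rather than a verdict against a pre-set alpha. A minimal sketch with hypothetical two-group data, using a permutation test (the group labels, effect size, and sample sizes are my own illustrative assumptions):

```python
import numpy as np

# Hypothetical scores for two groups (values are illustrative only)
rng = np.random.default_rng(1)
group_a = rng.normal(0.0, 1.0, size=40)
group_b = rng.normal(0.4, 1.0, size=40)

observed = group_b.mean() - group_a.mean()

# Two-sided permutation test: how often does a random relabelling of the
# pooled scores yield a mean difference at least as large as the observed one?
pooled = np.concatenate([group_a, group_b])
n = group_a.size
n_perm = 10_000
exceed = 0
for _ in range(n_perm):
    rng.shuffle(pooled)
    diff = pooled[n:].mean() - pooled[:n].mean()
    if abs(diff) >= abs(observed):
        exceed += 1
p_value = (exceed + 1) / (n_perm + 1)

# Report the estimate and the exact P -- no alpha cut-off, no
# 'statistically significant' / 'ns' verdict attached.
print(f"mean difference = {observed:.3f}, P = {p_value:.4f}")
```

The reader is left to weigh the evidence as a continuous quantity, which is precisely what Hurlbert and Lombardi argue Fisher was moving toward.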