I would like to begin by apologising for not getting a myth out last month; I was working in the Philippines. Having just arrived back in Singapore, I will make sure to get two myths out this month.
The first myth I wish to highlight for April is one that some may see as bordering on sacrilege in the industry. The idea I wish to challenge is that I/O psychology can truly be classed as a strong measurement science. To be clear, I’m not saying that I/O is not a science, or that it does not attempt to measure aspects of human behaviour related to work. Rather, what I’m suggesting is that what it does is not measurement as the word is commonly used. The corollary is that to talk of measurement in our field as if it matched the common use of the term is to grant the discipline more predictability and rigour than it deserves.
The classic paper that challenged my thinking about measurement was ‘Is Psychometrics Pathological Science?‘ by Joel Michell.
Abstract
Pathology of science occurs when the normal processes of scientific investigation break down and a hypothesis is accepted as true within the mainstream of a discipline without a serious attempt being made to test it and without any recognition that this is happening. It is argued that this has happened in psychometrics: The hypothesis, upon which it is premised, that psychological attributes are quantitative, is accepted within the mainstream, and not only do psychometricians fail to acknowledge this, but they hardly recognize the existence of this hypothesis at all.
Regarding measurement, Michell presents very clear and concise arguments about what constitutes a measurable phenomenon and why psychological attributes fail this test. While in parts these axioms are relatively technical, the upshot is that the mere fact that a variable is ordered does not itself constitute measurement; ‘measurement’ requires further hurdles to be cleared. A broad example is additivity: the requirement that magnitudes of an attribute combine in a lawful, testable way, so that two magnitudes add to produce a third, or lend support to an alternative equation. Psychological attributes fail on this and many other properties of measurement. As such, the basis for claims of measurement is, in my opinion, limited (or at least comes with cautions and disclaimers), and therefore much of the discipline’s claim to belong to the ‘measurement-based science’ school is not substantiated.
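For the technically minded, here is a simplified sketch of the kind of conditions Michell has in mind (my rendering of the classical Hölder-style requirements for quantity, not his full treatment). For magnitudes $a$, $b$, $c$ of an attribute to count as quantitative, there must exist an additive structure satisfying, among other conditions:

$$
\begin{aligned}
a + b &= b + a && \text{(commutativity)} \\
(a + b) + c &= a + (b + c) && \text{(associativity)} \\
a \ge b &\iff a + c \ge b + c && \text{(monotonicity)}
\end{aligned}
$$

Mere order guarantees only that any two magnitudes can be ranked; it says nothing about whether an operation with these properties exists at all, and that existence is precisely the empirical hypothesis Michell argues psychometrics has never seriously tested.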
The limitations of the discipline as a measurement science are so fundamental that they should challenge the discipline far more than they currently do. The outcome should be both a downplaying of measurement practices and a greater focus on areas such as theory building, with theories then tested using a range of alternative methodologies. Similar calls have been made over the past few years, and the disquiet in the discipline is growing:
Klein, S.B. (2014). What can recent replication failures tell us about the theoretical commitments of psychology? Theory & Psychology, 1-14.
Abstract
I suggest that the recent, highly visible, and often heated debate over failures to replicate results in the social sciences reveals more than the need for greater attention to the pragmatics and value of empirical falsification. It is also a symptom of a serious issue—the under-developed state of theory in many areas of psychology.
Krause, M.S. (2012). Measurement validity is fundamentally a matter of definition, not correlation. Review of General Psychology, 16(4), 391-400.
Abstract
….However, scientific theories can only be known to be true insofar as they have already been demonstrated to be true by valid measurements. Therefore, only the nature of a measure that produces the measurements for representing a dimension can justify claims that these measurements are valid for that dimension, and this is ultimately exclusively a matter of the normative definition of that dimension in the science that involves that dimension. Thus, contrary to the presently prevailing theory of construct validity, a measure’s measurements themselves logically cannot at all indicate their own validity or invalidity by how they relate to other measures’ measurements unless these latter are already known to be valid and the theories represented by all these several measures’ measurements are already known to be true….This makes it essential for each basic science to achieve normative conceptual analyses and definitions for each of the dimensions in terms of which it describes and causally explains its phenomena.
Krause, M.S. (2013). The data analytic implications of human psychology’s dimensions being ordinally scaled. Review of General Psychology, 17(3), 318-325.
Abstract
Scientific findings involve description, and description requires measurements on the dimensions descriptive of the phenomena described. …Many of the dimensions of human psychological phenomena, including those of psychotherapy, are naturally gradated only ordinally. So descriptions of these phenomena locate them in merely ordinal hyperspaces, which impose severe constraints on data analysis for inducing or testing explanatory theory involving them. Therefore, it is important to be clear about what these constraints are and so what properly can be concluded on the basis of ordinal-scale multivariate data, which also provides a test for methods that are proposed to transform ordinal-scale data into ratio-scale data (e.g., classical test theory, item response theory, additive conjoint measurement), because such transformations must not violate these constraints and so distort descriptions of studied phenomena.
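A compact way to state the constraint Krause describes, borrowing the standard meaningfulness criterion from representational measurement theory: a claim $S$ computed from ordinal data is meaningful only if its truth survives every order-preserving recoding $f$ of the scale,

$$
S(x_1, \ldots, x_n) \iff S(f(x_1), \ldots, f(x_n)) \quad \text{for every strictly increasing } f.
$$

Order-based statistics such as medians and ranks pass this test; means, differences, and variances of ordinal codes generally do not.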
What these papers identify is that:
- We must start with good theory building, and theories must be deep and wide enough to be tested and falsified.
- Construct validity is indeed important, but correlations between tests are not enough; we need agreement on the meaning of the attributes themselves (such as the Big Five).
- Treating comparative data (such as scores on a normal curve) as if it were rigorous measurement is at best misleading and at worst fraud (a small demonstration follows this list).
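To make that last point concrete, here is a minimal sketch in Python (the ratings and the recoding are my own hypothetical illustration, not drawn from any of the papers above). Squaring the codes is strictly increasing, so it preserves every ordinal fact in the data, yet it reverses which group has the higher mean:

```python
# Minimal demonstration that mean comparisons on ordinal codes are not
# invariant under order-preserving recodings of the scale.

group_a = [1, 5, 5]   # hypothetical 1-5 ratings, group A
group_b = [4, 4, 4]   # hypothetical 1-5 ratings, group B

def mean(xs):
    return sum(xs) / len(xs)

def recode(xs):
    # x -> x**2 is strictly increasing on positive codes,
    # so it preserves every ordinal relation in the data.
    return [x ** 2 for x in xs]

print(mean(group_a), mean(group_b))                  # 3.67 vs 4.00 -> B looks higher
print(mean(recode(group_a)), mean(recode(group_b)))  # 17.0 vs 16.0 -> A looks higher
```

If our scales are only ordinal, any conclusion that flips under such a recoding was never licensed by the data in the first place.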
So where does this leave the discipline? Again, as is the theme threading through all these myths, we must embrace the true scientist/practitioner model and recognise that our discipline is a craft. Over-reliance on quantitative techniques is extremely limiting for the discipline, and we need alternative ways of conceptualising ‘measurement’. In this regard, I’m a big fan of the evaluation literature (e.g., Reflecting on the past and future of evaluation: Michael Scriven on the differences between evaluation and social science research) as a source of alternative paradigms for solving I/O problems.
We must at the same time embrace the call for better theory building. If I/O psychology, and psychology in general, is going to make valuable contributions to the development of human thought, it will start with good, sound theory. Simply putting numbers to things does not constitute theory building.
When using numbers we must also look for alternative statistical techniques to support our work. An example is Grice’s (2011) Observation Oriented Modeling: Analysis of Cause in the Behavioral Sciences. I looked at this work when thinking about how we assess reliability (and then statistically demonstrate it) and think it has huge implications.
Finally, when using numbers to substantiate an argument, support a theory, or find evidence for an intervention, we need to be clear on what they are really saying. Statistics can mislead at best and lie at worst, so we must be clear about what we are and are not claiming, and about the limitations of any conclusions we draw from reliance on data. To present numbers as if they had measurement robustness is simply wrong.
In the next blog, I want to discuss the myth of impartiality and why these myths continue to pervade the discipline.
Acknowledgement: I would like to acknowledge Professor Paul Barrett for his thought leadership in this space and for opening my eyes to the depth of measurement issues we face. Paul brought to my attention the articles cited and I’m deeply grateful for his impact on my thinking and continued professional growth.