Validity is perhaps one of the most misunderstood concepts in HR analytics, and in psychometrics in particular. This is a topic I have previously written about on this blog, but the message has yet to fully resonate with the HR community. The most common question OPRA gets asked about any solution we sell, be it an assessment, survey or intervention, continues to be “What is the validity?”
On the face of it, this is a perfectly reasonable question. When probed further, however, it becomes clear that there remains a gap in understanding what validity translates to in terms of business outcomes. The answer to the question is invariably a validity coefficient rolled off the tongue, which then satisfies some checkbox prescribed for decision making.
Instead of asking about validity, the real question should be “How useful is this assessment or intervention?” This leads to a more focused question: how useful is this assessment or intervention for the problem (or problems) the business wishes to solve? The question is thus no longer one of simply producing a number to satisfy an artificial validity criterion; it is reframed around the business imperative of usefulness.
Ironically, this in no way limits the rigour with which an assessment, intervention or survey will be evaluated. On the contrary, the bar becomes far higher. The reality is that obtaining a respectable correlation, say r = 0.3, between some measure and an outcome is not particularly difficult. What is far more difficult is putting that figure in the context of a wider system and, in turn, determining usefulness. Fortunately, there are frameworks that help with this thinking.
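To make the distinction concrete, here is a minimal sketch, using simulated data and an assumed predictor-outcome correlation of r = 0.3, of why the coefficient alone says little about value: the payoff from the same correlation changes markedly with context, such as the selection ratio.

```python
# Illustrative sketch only: the practical value of a predictor with r ~ 0.3
# depends on context (here, the selection ratio), not just the coefficient.
import numpy as np

rng = np.random.default_rng(42)
r = 0.3          # assumed predictor-outcome correlation
n = 100_000      # simulated applicant pool

# Simulate standardised predictor scores and outcomes with correlation r
predictor = rng.standard_normal(n)
outcome = r * predictor + np.sqrt(1 - r**2) * rng.standard_normal(n)

for selection_ratio in (0.50, 0.20, 0.05):
    cutoff = np.quantile(predictor, 1 - selection_ratio)
    selected_outcome = outcome[predictor >= cutoff]
    # Average outcome of those selected, in standard-deviation units,
    # relative to selecting at random (which averages 0 by construction)
    print(f"selection ratio {selection_ratio:.2f}: "
          f"mean outcome of selected group = {selected_outcome.mean():.2f} SD")
```

In this sketch, the same r = 0.3 is worth roughly a quarter of a standard deviation of the outcome when half the applicants are hired, and more than twice that when only one in twenty is, and that is before we have even asked what a standard deviation of the outcome is worth to the business. That is the kind of system-level reasoning usefulness demands and that a bare coefficient cannot provide.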
One framework I am particularly drawn to is the Key Evaluation Checklist developed by Professor Michael Scriven, a concept that has been furthered by more recent work in the field. The point is that usefulness combines statistics, logic and systems thinking to establish the merit, worth and significance of whatever is being evaluated. This is a far more systematic way of thinking about validity and gives us an applied framework for determining usefulness.
Likewise, when we are interested in truly understanding construct validity, the standard correlations between like measures simply do not suffice. What is required is to understand the nomological network in which the construct exists and demonstrate not only what it correlates with (convergent validity) but also what it can be discriminated from (discriminant validity). In this way, we build a deeper understanding of the construct of interest, and in turn, increase our understanding of its usefulness.
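As a toy illustration (simulated data, with hypothetical construct and scale names), the sketch below shows the basic pattern a nomological network implies: two measures of the same construct should correlate strongly with each other (convergent validity) and far more weakly with a measure of a different construct (discriminant validity).

```python
# Illustrative sketch only: convergent vs. discriminant correlations
# with simulated scale scores and hypothetical construct names.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
n = 500

# Two latent constructs, modestly correlated with each other
conscientiousness = rng.standard_normal(n)
extraversion = 0.2 * conscientiousness + np.sqrt(1 - 0.2**2) * rng.standard_normal(n)

def observed(latent, reliability=0.8):
    """An observed scale score: latent signal plus measurement error."""
    return np.sqrt(reliability) * latent + np.sqrt(1 - reliability) * rng.standard_normal(n)

scales = pd.DataFrame({
    "conscientiousness_A": observed(conscientiousness),  # two measures of the same construct
    "conscientiousness_B": observed(conscientiousness),
    "extraversion_A": observed(extraversion),            # a measure of a different construct
})

print(scales.corr().round(2))
# Convergent validity: conscientiousness_A and conscientiousness_B correlate highly (~0.8).
# Discriminant validity: both correlate far more weakly with extraversion_A (~0.15).
```

A full nomological network goes much further than this, of course, but even this simple pattern tells us more about the construct, and its likely usefulness, than a single validity coefficient does.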
In the world of HR analytics and big data, we sometimes forget that usefulness, not validity, is our ultimate goal. Number crunching is but a tool in that process. What matters more is an understanding of the framework within which usefulness will be demonstrated. When good analytics meet good applied thinking and methodology, progress will ensue. But to treat validity as an end-point in itself is to render any HR analytics project incapable of adding value to an organisation.