
Studies You Should Know: General Cognitive Ability and Job Performance: A Contemporary Reassessment
The Question
Few findings in industrial and organisational psychology have been treated as more settled than this one: general cognitive ability (GCA) is the single best predictor of job performance. The figure most commonly cited in support of this claim comes from Schmidt and Hunter’s landmark 1998 meta-analysis, which reported a corrected validity of .51. That number has been repeated in textbooks, cited in court cases, and used to justify the use of cognitive ability tests in hiring decisions for decades. Sackett and colleagues (2024) ask a straightforward but important question: does that figure hold up when you look at data from the current century?
The Study
The meta-analysis draws on 153 samples comprising 40,740 participants, all from studies conducted in the 21st century. This is a deliberate design choice: the most prominent and influential data on the GCA-performance relationship are more than 50 years old, and the world of work has changed substantially since they were collected. Examining only contemporary data allows the authors to assess whether the relationship has remained stable or shifted over time (Sackett et al., 2024).
What They Found
The results tell a consistent story of a real but considerably smaller relationship than the canonical figures suggest.
The mean observed validity across all 153 samples was .16. After correcting for unreliability in the performance criterion and applying range restriction corrections to predictive studies, the mean corrected validity rose to .22, with a residual standard deviation of .11 (Sackett et al., 2024).
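The two corrections named here are standard psychometric adjustments: Spearman's correction for unreliability in the criterion, and Thorndike's Case II correction for direct range restriction. A minimal sketch of how an observed .16 gets adjusted upward, using an assumed criterion reliability of .60 and an assumed range restriction ratio of 1.1 purely for illustration (these are not the artifact values Sackett et al. actually used):

```python
import math

r_observed = 0.16             # mean observed validity reported in the study
criterion_reliability = 0.60  # ASSUMED rating reliability, for illustration only
U = 1.1                       # ASSUMED ratio of unrestricted to restricted SD

# Step 1: disattenuate for unreliability in the performance criterion
# (Spearman: divide by the square root of the criterion's reliability)
r_disattenuated = r_observed / math.sqrt(criterion_reliability)

# Step 2: correct for direct range restriction (Thorndike Case II)
r_corrected = (U * r_disattenuated) / math.sqrt(1 + r_disattenuated**2 * (U**2 - 1))

print(f"corrected validity ≈ {r_corrected:.2f}")
```

With these illustrative inputs the corrected value lands in the low .20s, in the neighbourhood of the study's .22; the point of the sketch is the mechanics of the corrections, not a reconstruction of the study's actual artifact distributions.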
To put this in context: the .51 figure from Schmidt and Hunter (1998) had already drawn substantial methodological criticism. Sackett and colleagues themselves published a reanalysis in 2022 that, integrating findings from prior 20th century meta-analyses, arrived at a corrected validity of .31, substantially lower than .51 but still considerably higher than the .22 the current study finds for 21st century data (Sackett et al., 2024).
The progression is worth stating plainly. The long-accepted figure was .51. A careful reanalysis of 20th century data suggested .31. Contemporary 21st century data now suggest .22. Each step represents a meaningful downward revision.
Why the Estimate May Have Declined
The authors do not claim to have definitively identified why the contemporary estimate is lower, but several candidate explanations are worth considering. The nature of job performance itself has changed, with more roles now involving complex interpersonal, creative, and adaptive demands that may be less strongly predicted by GCA than the largely manual and procedural jobs that dominated earlier datasets. The measurement of performance has also evolved, with broader and more multidimensional criteria that may attenuate the relationship with any single predictor. Selection practices themselves may also play a role: if organisations have increasingly used GCA-based screening over recent decades, range restriction in hired samples becomes more severe, potentially suppressing observed validities even after statistical correction (Sackett et al., 2024).
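The range restriction mechanism in the last of these explanations can be demonstrated with a small simulation: generate a population in which test scores and performance correlate, "hire" only the top scorers, and recompute the correlation among those hired. The population correlation of .50 and the 30% selection rate below are arbitrary illustrative choices, not figures from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
rho = 0.50  # illustrative population correlation between test score and performance

# Bivariate normal data: x = test score, y = job performance
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

r_full = np.corrcoef(x, y)[0, 1]

# Hire only the top 30% on the test (direct range restriction on x)
hired = x >= np.quantile(x, 0.70)
r_restricted = np.corrcoef(x[hired], y[hired])[0, 1]

print(f"full population: r = {r_full:.2f}")
print(f"hired sample:    r = {r_restricted:.2f}")
```

The correlation computed only among those hired comes out markedly lower than the population value (roughly .28 versus .50 under these settings), even though nothing about the underlying relationship has changed. The more aggressively organisations screen on the predictor, the more severe this shrinkage becomes.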
What the Finding Does and Does Not Mean
Sackett and colleagues (2024) are clear that GCA remains a meaningful predictor of job performance. A corrected validity of .22 is not trivial. In the context of personnel selection, where differences between stronger and weaker performers carry real costs and even modest predictive validity compounds across thousands of hiring decisions, a validity of .22 still has practical value.
What the finding does undermine is the stronger version of the claim: that GCA is so powerful a predictor that it should be the centrepiece of selection systems, and that other predictors add only marginal value by comparison. A validity of .22 leaves a great deal of variance in performance unexplained, and positions GCA as one useful predictor among several rather than the dominant force it has sometimes been portrayed as.
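The gap between the three estimates is easiest to see in variance terms: squaring each validity coefficient gives the share of performance variance the predictor accounts for.

```python
# Variance in job performance explained (r squared) by each of the
# three validity estimates discussed above.
estimates = [
    ("Schmidt & Hunter (1998)", 0.51),
    ("Sackett et al. (2022) reanalysis", 0.31),
    ("Sackett et al. (2024), 21st century data", 0.22),
]

for label, r in estimates:
    print(f"{label}: r = {r:.2f}, variance explained = {r**2:.1%}")
```

The move from .51 to .22 is a drop from about 26% of performance variance explained to under 5%, which is the quantitative core of the shift from "dominant predictor" to "one useful predictor among several".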
Why It Matters
The practical and legal stakes of this question are substantial. Cognitive ability testing in hiring has been justified, defended, and challenged on the basis of its predictive validity. If the actual validity of GCA for contemporary job performance is closer to .22 than .51, the evidential basis for treating it as the gold standard of selection shifts considerably. Organisations, practitioners, and researchers who have been working from the older figures are working from an outdated picture, and the decisions built on that picture deserve revisiting.
More broadly, the study is a reminder that even the most robust-seeming findings in psychology require periodic re-examination with contemporary data. The world of work in the 1970s is not the world of work today, and assuming that relationships established in one era will hold unchanged in another is an empirical claim, not a safe assumption.
Excerpt
The idea that cognitive ability is the gold standard predictor of job performance has been a cornerstone of occupational psychology for decades. A 2024 meta-analysis of 21st century data finds the relationship is real but considerably smaller than the figures most practitioners have been working from.
Reference
Sackett, P. R., Demeke, S., Bazian, I., Griebie, A., Priest, R., & Kuncel, N. (2024). A contemporary look at the relationship between general cognitive ability and job performance. Journal of Applied Psychology. Advance online publication. https://doi.org/10.1037/apl0001159
