Well, there it is: another year down and another to look forward to. This post brings to an end my series on the myths of our industry, and I wanted to finish by summarising some guidelines on how to be more critical about i/o research and the conclusions drawn from our discipline.

Our discipline is not all mythology, as shown in some of my recent posts, such as those on the effectiveness of training and the value of personality testing. On the contrary, there is a growing body of findings that shows what works, what doesn’t and why. However, claims move from fact to fiction when commercialisation and academic reputation take over.

With this in mind, those attempting to apply research need a simple way to test the soundness of what they are reading. Here are my top seven tips for spotting myths:

  1. Who has done the research? There are many vested interests in psychology, from commercial firms touting the next big thing to academics defending a position they have built for themselves. When you understand a person’s starting position, you will read what they write with open eyes. When evaluating any claim, ask yourself: ‘What is their angle, and do they have anything to gain from such a claim? Are they presenting a balanced argument, or reporting commercial findings in a fair manner?’
  2. Are the claims too good to be true? Dealing with human behaviour is a messy business. Single variables, on a good day with the wind blowing in the right direction, account for roughly 10% of the variability in a given outcome; a correlation of r = 0.3 between a personality trait and job performance, for example, explains just 9% of the variance (see the first worked example after this list). Unfortunately, the public are unaware of this and hold expectations around prediction that are simply unrealistic. These expectations are then played on by marketing companies making claims such as ‘90% accuracy’. Such claims are outrageous and a sure sign that you are once again in the clutches of a myth.
  3. When looking at applied studies, does the research design account for moderator variables? Psychological research often loses its usefulness by failing to account for moderators. Too often we get simple correlations between variables without recognising that the finding holds only under certain conditions, or is eroded entirely once another variable enters the scene (the second sketch after this list shows how this happens).
  4. Is the research discussed as part of a system? Building on the previous point, research that does not discuss its findings as part of a wider ecosystem is invariably limited. As scientist-practitioners, our work does not exist in a vacuum. It is part of a complex set of ever-changing, intertwined variables that together produce an outcome: selection leads to on-boarding, which leads to training, which leads to performance management, and so on. Research needs to identify this system and report findings accordingly.
  5. Are the results supported by logic as well as numbers? Nothing can blind the reader of i/o science like numbers. As the sophistication of mathematical justification in our discipline has grown, the usefulness of many of its studies has dropped. Psychology is as much a philosophy as a science, and logic is as important as numbers in demonstrating an evidence base. Look for studies that follow the laws of logic, where hypotheses are not only supported but alternative theories are dismissed. Look for studies that are parsimonious in their explanation, yet not so simplistic that they fail to account for the underlying complexity of human behaviour.
  6. Are the results practically meaningful? Don’t be confused by statistical significance. It simply means we have a certain level of confidence that a finding was not due to chance, and that if the study were repeated we would likely get a similar result. It tells us nothing about the practical significance of the finding (i.e. how useful is it? how do I use it?). Too often I see tiny but statistically significant findings touted as a ‘breakthrough’ when the effect is so small as to be meaningless unless applied to huge samples (the final simulation after this list shows how easily this happens).
  7. Be critical first, acquiesce second! If I have one piece of advice, it is to be critical first and accept nothing until convinced. Don’t accept anything because of the speaker, the company, or the numbers; instead, make anyone and everyone convince you. How is this done? Ask why. Ask what. Ask how. If you do nothing besides taking this stance as part of a critical review, it will make you a far more effective user of research and a far better i/o psychologist or HR professional.
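
For the curious, the arithmetic behind tip 2 can be written out as a minimal Python sketch. The numbers are illustrative only, and reading a vendor’s ‘90% accuracy’ claim as variance explained is my own assumption, since such claims are rarely defined precisely:

```python
import math

# Tip 2 in numbers: a correlation of r = 0.3 between a predictor
# (e.g. a personality trait) and an outcome (e.g. job performance)
# explains only r^2 = 9% of the variance in that outcome.
r = 0.3
print(f"r = {r} -> variance explained = {r ** 2:.0%}")  # 9%

# Read as variance explained, a '90% accuracy' claim would require
# roughly r = 0.95 from a single variable, far beyond anything our
# discipline reliably produces.
print(f"90% variance explained requires r = {math.sqrt(0.9):.2f}")
```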
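
Tip 3 can likewise be made concrete with a short simulation. Everything below is invented, including the hypothetical moderator supportive_manager; the point is simply to show how a pooled correlation can hide a finding that holds under one condition and vanishes under another:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2_000
x = rng.normal(size=n)                           # the predictor
supportive_manager = rng.integers(0, 2, size=n)  # a hypothetical moderator
# The outcome tracks x only when the moderating condition is met.
y = np.where(supportive_manager == 1, 0.6 * x, 0.0) + rng.normal(size=n)

m = supportive_manager == 1
print(f"pooled r                = {np.corrcoef(x, y)[0, 1]:.2f}")        # ~0.28
print(f"r, supportive manager   = {np.corrcoef(x[m], y[m])[0, 1]:.2f}")  # ~0.51
print(f"r, unsupportive manager = {np.corrcoef(x[~m], y[~m])[0, 1]:.2f}")# ~0.00
```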
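
And finally, a sketch of tip 6, again with made-up data, assuming numpy and scipy are available: give a trivially small effect a big enough sample and it will clear the p < 0.05 bar while remaining practically meaningless.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 100_000                                        # a huge sample
group_a = rng.normal(loc=0.00, scale=1.0, size=n)
group_b = rng.normal(loc=0.02, scale=1.0, size=n)  # a tiny true difference

t_stat, p = stats.ttest_ind(group_a, group_b)
d = (group_b.mean() - group_a.mean()) / 1.0        # Cohen's d (both sds are 1)

print(f"p-value   = {p:.1e}")   # far below 0.05: 'statistically significant'
print(f"Cohen's d = {d:.3f}")   # around 0.02: practically meaningless
```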

To all those who have read and enjoyed this blog over the year, we at OPRA thank you. As a company we are passionate about i/o, warts and all, and it is a great privilege to contribute to the dialogue that challenges our discipline to be all that it can be. Have a great 2015 and we look forward to catching up with you offline and online over the year.