As I/O psychologists we are extremely reliant on the accuracy of the data presented to us. Decisions are made on the basis that what is presented is indeed factual and accurate. But how much data is ever cross-examined? How much ‘faith’ can we put in data generated by test producers? What independent bodies ever scrutinise the data we are presented with?
These questions are surprisingly rarely asked, and data is perhaps too often taken as ‘true’ without any deeper enquiry. A recent study by Daniele Fanelli (reviewed in The Economist, June 6, 2009) calls the fidelity of scientific data into question, noting that the ‘enhancement’ of data and findings is far more common than people might think.
Fanelli conducted a meta-analysis of surveys investigating scientific honesty, analysing 18 studies on the topic. His findings indicate that while admission of outright fraud was low (2%), about 10% confessed to questionable practices such as ‘dropping data points’ or ‘failing to present data that contradicts one’s previous research’. Moreover, 14% had seen colleagues falsify data, and a whopping 46% noted that they knew of colleagues who had used questionable methodologies.
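To see why a practice like ‘dropping data points’ matters, here is a minimal simulation sketch. This is my own illustration, not part of Fanelli’s study: it draws two groups from the same distribution (so any apparent effect is pure noise) and shows how selectively discarding inconvenient observations can turn a null result into a ‘significant’ one.

```python
# Illustrative sketch only (assumed data, not from the study): how
# 'dropping data points' can manufacture an effect where none exists.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Two groups drawn from the SAME distribution: no true difference.
treatment = rng.normal(loc=100, scale=15, size=30)
control = rng.normal(loc=100, scale=15, size=30)

t, p = stats.ttest_ind(treatment, control)
print(f"Honest analysis:     t = {t:.2f}, p = {p:.3f}")

# Questionable practice: drop the three lowest treatment scores and
# the three highest control scores, i.e. the points that 'contradict'
# the hoped-for result.
trimmed_treatment = np.sort(treatment)[3:]
trimmed_control = np.sort(control)[:-3]

t, p = stats.ttest_ind(trimmed_treatment, trimmed_control)
print(f"After dropping data: t = {t:.2f}, p = {p:.3f}")
```

Run it and the p-value collapses after the trim: the ‘effect’ is entirely an artefact of which data points were kept.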
What I found interesting about this study is that it relies on self-report. Since people are understandably reluctant to admit their own misconduct, these figures likely represent merely the tip of the iceberg of questionable science. With respect to test publishers, where there is a vested interest in finding supporting results, it is anyone’s guess how many of the results we see represent real effects. I think the message is that all science should be examined with a critical mind, and perverse incentives (whether for personal or commercial gain) should always be considered before placing too much weight on the results.