The problem with science is that so much of it simply isn’t

This is the opening sentence of an article titled “Scientific Regress” by William Wilson. The article is about science and the repeatability of scientific results in the published literature. (Indented paragraphs are quoted from this article, unless otherwise referenced.)

Scientific claims rest on the idea that experiments repeated under nearly identical conditions ought to yield approximately the same results, but until very recently, very few had bothered to check in a systematic way whether this was actually the case.

A group called the Open Science Collaboration (OSC) set out to check such claims by replicating the results of published experiments. They examined one hundred published psychology experiments and found that 65% failed to show any statistical significance on replication, and that many of the remainder showed greatly reduced effect sizes. The OSC even used the original experimental materials, and sometimes performed the experiments under the guidance of the original researchers.
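To see why a literature filtered on statistical significance can end up full of results that won’t replicate, here is a minimal simulation sketch. It is my own illustration, not the OSC’s method, and every parameter in it (the share of true hypotheses, the effect size, the sample sizes) is an assumption chosen for the example: only “significant” originals get “published,” and each published result is then rerun once under identical conditions.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)

# Assumed, illustrative parameters (not taken from the article or the OSC data).
N_STUDIES = 10_000   # candidate studies
P_TRUE = 0.3         # fraction of hypotheses with a real effect
EFFECT = 0.4         # true effect size (in SD units) when an effect exists
N = 30               # sample size per group, same for original and replication

def p_value(effect):
    """Run one two-group experiment and return its t-test p-value."""
    a = rng.normal(0.0, 1.0, N)
    b = rng.normal(effect, 1.0, N)
    return ttest_ind(a, b).pvalue

# "Publish" only the studies whose original run reached p < 0.05 ...
effects = [EFFECT if rng.random() < P_TRUE else 0.0 for _ in range(N_STUDIES)]
published = [d for d in effects if p_value(d) < 0.05]

# ... then rerun each published study once under identical conditions.
replicated = sum(p_value(d) < 0.05 for d in published)
print(f"{len(published)} published, {replicated} replicated "
      f"({replicated / len(published):.0%})")
```

Under these assumptions, well under half of the published results replicate, because the significance filter waves through every lucky false positive alongside the genuinely powered true effects.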

The problem, though, is not confined to psychology, which I don’t even consider a hard science anyway.

In 2011 a group of researchers at Bayer looked at 67 recent drug discovery projects based on preclinical cancer biology research. In more than 75% of cases they could not replicate the published data, data that had been published in reputable journals including Science, Nature, and Cell.

The author suggested that many new drugs may be ineffective because the research on which they were based was invalid: the original findings were simply false.

Then there is the issue of fraud.

In a survey of two thousand research psychologists conducted in 2011, over half of those surveyed admitted outright to selectively reporting those experiments which gave the result they were after.

This is experimenter bias at work. The apparent success of a research program may be all that is required to secure the next round of funding, so what starts as a mere character weakness in the experimenter can end up as outright fraud. The article states that many have no qualms in

… reporting that a result was statistically significant when it was not, or deciding between two different data analysis techniques after looking at the results of each and choosing the more favorable.
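The cost of that last practice is easy to demonstrate. Below is a small sketch, again my own illustration with assumed parameters: two groups are drawn from the same distribution, so every null hypothesis is true by construction, yet an analyst who measures two outcomes, tries two analysis techniques on each, and reports whichever p-value looks best crosses the 5% threshold far more often than 5% of the time.

```python
import numpy as np
from scipy.stats import ttest_ind, mannwhitneyu

rng = np.random.default_rng(1)
N_SIMS, N = 10_000, 30   # assumed, illustrative parameters

honest = flexible = 0
for _ in range(N_SIMS):
    # Two groups, two outcome measures each, all drawn from the SAME
    # distribution: every null hypothesis here is true by construction.
    a1, a2 = rng.normal(size=N), rng.normal(size=N)
    b1, b2 = rng.normal(size=N), rng.normal(size=N)
    pvals = [
        ttest_ind(a1, b1).pvalue,      # outcome 1, technique 1
        mannwhitneyu(a1, b1).pvalue,   # outcome 1, technique 2
        ttest_ind(a2, b2).pvalue,      # outcome 2, technique 1
        mannwhitneyu(a2, b2).pvalue,   # outcome 2, technique 2
    ]
    honest += pvals[0] < 0.05          # analysis chosen before seeing the data
    flexible += min(pvals) < 0.05      # report whichever analysis "worked"

print(f"pre-committed false positive rate: {honest / N_SIMS:.1%}")   # about 5%
print(f"pick-the-best false positive rate: {flexible / N_SIMS:.1%}") # roughly double
```

Nothing in the simulated data ever changes; only the analyst’s freedom to shop among analyses does, and that alone roughly doubles the rate of spurious “significant” findings.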
