
Hi all!

Just came across this paper (Francis et al.), one from the new "too good to be true" trend, and now I'm totally confused.

The paper statistically estimates excess success in a set of studies published in Science, using the reported effect and sample sizes together with the ratio of accepted to rejected null hypotheses. It's quite intuitive that when authors support their finding with 20 t-tests, each with a very small effect size and a P-value just under 0.05, the finding looks very questionable.

To investigate this problem, the authors use the P-TES (Test for Excess Significance) metric, calculated as the product of the probabilities that each statistical test succeeds given its effect size, e.g.

"The estimated probability that five experiments like these would all produce successful outcomes is the product of the five joint probabilities, P-TES = 0.018."

Since each success probability is <= 1, a paper with a long list of experiments is highly likely to end up with P-TES < 0.05. In other words, the P-TES score depends heavily on the complexity of the phenomenon under study.
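To make this concrete, here is a minimal sketch of the calculation as I understand it: estimate each experiment's probability of success (its post-hoc power, here via a normal approximation to the noncentral t, not the exact method Francis uses) and multiply them. The effect sizes and sample sizes below are made up for illustration.

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def approx_power(d, n, alpha=0.05):
    """Approximate power of a two-sided one-sample t-test with
    effect size d (Cohen's d) and sample size n, using the normal
    approximation to the noncentral t distribution."""
    z_crit = 1.959963984540054      # Phi^-1(1 - alpha/2) for alpha = 0.05
    delta = d * math.sqrt(n)        # noncentrality parameter
    return norm_cdf(delta - z_crit) + norm_cdf(-delta - z_crit)

# Hypothetical (effect size, sample size) pairs for five experiments
experiments = [(0.5, 30), (0.45, 40), (0.5, 35), (0.4, 50), (0.55, 28)]

powers = [approx_power(d, n) for d, n in experiments]
p_tes = math.prod(powers)  # probability that ALL experiments succeed

print([round(p, 3) for p in powers])
print(round(p_tes, 3))
```

Note that even when every individual experiment has power around 0.8, the product over five experiments already drops to roughly 0.3-0.4, which is exactly the accumulation effect I'm worried about for multi-assay papers.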

The authors suggest extending their methodology to check papers in the field of biology. In bioinformatics, we usually provide lots of complementary analyses for a phenomenon under study, e.g. performing RNA-Seq, Methyl-Seq and ChIP-Seq under multiple conditions for a given transcription factor, checking for motif over-representation, etc. Wouldn't this automatically render a thorough bioinformatics analysis as having an "excessive probability of success"?

Am I missing something critical here??

written 4.9 years ago by mikhail.shugay
