Meta-Analysis of Genuine and Fake p-Values


Abstract

For Sir Ronald Fisher, consistently obtaining significant p-values was essential to support an experimental hypothesis, so replicating experiments to obtain independent p-values is a legitimate and desirable research practice. Several simple statistics have been proposed for the meta-analysis of p-values, all assuming that the p-values are genuine, i.e. observations from independent standard Uniform random variables. However, since publication bias favors studies that report "significant" p-values, when p > 0.05 is obtained for the outcome of an experiment some researchers will "fall into temptation" and replicate the experiment in the hope of obtaining a smaller second p-value, ideally a significant one. If only the smaller of the two p-values is then reported, it is a "fake" p-value with a Beta(1,2) distribution, not a uniformly distributed genuine p-value. This is an unacceptable scientific research practice, and the detection of fake p-values is moreover impractical; even when it is possible, the analytic results needed to accommodate their existence in combined tests are cumbersome. To support informed decisions, including when the presence of fake p-values in a sample of p-values to be meta-analyzed is probable, tables of simulated critical values for the usual combined tests are supplied. These tables also allow comparisons to be made between several combined tests.
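The Beta(1,2) claim can be checked directly: the minimum of two independent Uniform(0,1) variables has CDF F(x) = 1 - (1 - x)^2, which is exactly the Beta(1,2) CDF. A minimal simulation sketch (not from the article; variable names are illustrative):

```python
import random

# Simulate the "fall into temptation" scenario: run the experiment twice,
# report only the smaller of the two genuine Uniform(0,1) p-values.
random.seed(42)
n = 100_000
fake_p = [min(random.random(), random.random()) for _ in range(n)]

# Compare the empirical CDF of the reported minimum with the Beta(1,2)
# CDF, F(x) = 1 - (1 - x)^2, at a few reference points.
for x in (0.05, 0.25, 0.5):
    empirical = sum(p <= x for p in fake_p) / n
    theoretical = 1 - (1 - x) ** 2
    print(f"P(min <= {x}): empirical {empirical:.3f}, Beta(1,2) {theoretical:.3f}")
```

Note that at the 0.05 level the fake p-value is "significant" with probability 1 - 0.95^2 = 0.0975, nearly double the nominal rate, which is why treating such minima as genuine uniform p-values distorts combined tests.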