Publication Bias

A Publication Bias is a research bias that arises when the outcome of a study influences whether it is published or otherwise distributed, so that the published literature over-represents positive or statistically significant findings.



References

2023

  • (Wikipedia, 2023) ⇒ https://en.wikipedia.org/wiki/publication_bias Retrieved:2023-1-2.
    • In published academic research, publication bias occurs when the outcome of an experiment or research study biases the decision to publish or otherwise distribute it. Publishing only results that show a significant finding disturbs the balance of findings in favor of positive results. The study of publication bias is an important topic in metascience.

      Despite similar quality of execution and design, papers with statistically significant results are three times more likely to be published than those with null results. This unduly motivates researchers to manipulate their practices to ensure statistically significant results, such as by data dredging. Many factors contribute to publication bias.[1] For instance, once a scientific finding is well established, it may become newsworthy to publish reliable papers that fail to reject the null hypothesis. Most commonly, investigators simply decline to submit results, leading to non-response bias. Investigators may also assume they made a mistake, find that the null result fails to support a known finding, lose interest in the topic, or anticipate that others will be uninterested in the null results.[2] The nature of these issues and the resulting problems have been described as the five diseases that threaten science: "significosis, an inordinate focus on statistically significant results; neophilia, an excessive appreciation for novelty; theorrhea, a mania for new theory; arigorium, a deficiency of rigor in theoretical and empirical work; and finally, disjunctivitis, a proclivity to produce many redundant, trivial, and incoherent works." Attempts to find unpublished studies often prove difficult or are unsatisfactory.[1] In an effort to combat this problem, some journals require studies submitted for publication to be pre-registered (before data collection and analysis) with organizations like the Center for Open Science. Other proposed strategies to detect and control for publication bias[1] include p-curve analysis and disfavoring small and non-randomized studies due to high susceptibility to error and bias.[2]

  1. H. Rothstein, A. J. Sutton and M. Borenstein (2005). Publication Bias in Meta-Analysis: Prevention, Assessment and Adjustments. Wiley: Chichester, England; Hoboken, NJ.
  2. P. J. Easterbrook, J. A. Berlin, R. Gopalan and D. R. Matthews (1991). "Publication bias in clinical research." The Lancet, 337(8746): 867-872.
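
  The passage above lists p-curve analysis among the strategies proposed to detect publication bias. As a much-simplified illustration of the idea (not taken from the cited sources), the sketch below checks whether reported significant p-values pile up near zero, as they should when the studied effects are real, rather than spreading evenly over (0, .05), as they do when a true-null literature is selectively reported. The p-values and the .025 split are hypothetical; a full p-curve analysis also uses pp-values and a combined test.

      # Toy p-curve check (Python); "reported_p" is hypothetical example data.
      from scipy.stats import binomtest

      reported_p = [0.004, 0.011, 0.019, 0.032, 0.041, 0.046, 0.048, 0.049]
      significant = [p for p in reported_p if p < 0.05]
      low_half = sum(p < 0.025 for p in significant)   # right skew suggests evidential value
      test = binomtest(low_half, n=len(significant), p=0.5, alternative='greater')
      print(f"{low_half}/{len(significant)} significant p-values fall below .025; "
            f"binomial test for right skew: p = {test.pvalue:.3f}")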

2023

  • (Wikipedia, 2023) ⇒ https://en.wikipedia.org/wiki/Publication_bias#Definition Retrieved:2023-1-2.
    • Publication bias occurs when the publication of research results depends not just on the quality of the research but also on the hypothesis tested, and the significance and direction of effects detected. The subject was first discussed in 1959 by statistician Theodore Sterling to refer to fields in which "successful" research is more likely to be published. As a result, "the literature of such a field consists in substantial part of false conclusions resulting from errors of the first kind in statistical tests of significance". In the worst case, false conclusions could be canonized as true if the publication rate of negative results is too low.

      Publication bias is sometimes called the file-drawer effect, or file-drawer problem. This term suggests that results not supporting the hypotheses of researchers often go no further than the researchers' file drawers, leading to a bias in published research. The term "file drawer problem" was coined by psychologist Robert Rosenthal in 1979.

      Positive-results bias, a type of publication bias, occurs when authors are more likely to submit, or editors are more likely to accept, positive results than negative or inconclusive results. Outcome reporting bias occurs when multiple outcomes are measured and analyzed, but the reporting of these outcomes depends on the strength and direction of their results. A generic term coined to describe these post-hoc choices is HARKing ("Hypothesizing After the Results are Known").
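
  Sterling's point quoted above, that a literature built from selectively published "successful" results can consist substantially of errors of the first kind, can be illustrated with a small simulation (not from the cited sources; the base rate of true effects, effect size, and sample sizes below are assumptions chosen for illustration):

      # Minimal publication-filter simulation (Python).
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      n_studies, n_per_group = 10_000, 30
      true_effect_rate = 0.10            # assumed: 10% of tested hypotheses are true
      published_true = published_false = 0

      for _ in range(n_studies):
          effect_is_real = rng.random() < true_effect_rate
          delta = 0.5 if effect_is_real else 0.0        # standardized mean difference
          treatment = rng.normal(delta, 1.0, n_per_group)
          control = rng.normal(0.0, 1.0, n_per_group)
          if stats.ttest_ind(treatment, control).pvalue < 0.05:  # only "successes" get published
              if effect_is_real:
                  published_true += 1
              else:
                  published_false += 1

      print(f"Share of published findings that are Type I errors: "
            f"{published_false / (published_true + published_false):.0%}")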

2009

  • F. Song et al. (Extent of publication bias in different categories of research cohorts: a meta-analysis of empirical studies, BMC Medical Research Methodology 2009, 9:79, http://www.biomedcentral.com/1471-2288/9/79)
    • report on a meta-analysis of studies, each of which examines a cohort of research studies for publication bias. In the cohorts examined, publication bias tended to take the form of results not being presented at conferences and not being submitted for publication. The paper also discusses the different types of evidence for publication bias.

2009

  • Hopewell, S. et al. (Publication Bias in Clinical Trials due to Statistical Significance or Direction of Trial Results, Cochrane Database of Systematic Reviews 2009, Issue 1)
    • conclude that "Trials with positive findings are published more often, and more quickly, than trials with negative findings."

2007

  • Ferguson, C. J. (2007). "Evidence for Publication Bias in Video Game Violence Effects Literature: A meta-analytic review." In: Aggression and Violent Behavior, 12. doi:10.1016/j.avb.2007.01.001

2000

  • J. Scargle (2000) Publication bias: The "file-drawer" problem in scientific inference, Journal of Scientific Exploration, Vol. 14, No. 1, pp. 91-106.

1995

  • T. D. Sterling, W. L. Rosenbaum and J. J. Weinkam (Publication Decisions Revisited: The Effect of the Outcome of Statistical Tests on the Decision to Publish and Vice Versa, The American Statistician, 1995, Vol. 49, No. 1, pp. 108-112)
    • review the literature through 1995, and report on an additional study indicating the occurrence of publication bias, with statistically significant results being over-represented relative to what would be expected (although the rate depended on the field). They also provide anecdotal evidence that papers may be rejected for publication on the basis of having a result that is not statistically significant.

1979

  • R. Rosenthal (1979) The "file drawer problem" and tolerance for null results, Psychological Bulletin, Vol. 86, No. 3, pp. 638-641.
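
  Rosenthal's 1979 paper above frames tolerance for null results in terms of a "fail-safe N": the number of unpublished null-result studies that would have to be sitting in file drawers before the combined significance of the published studies disappears. Below is a minimal sketch of that calculation, assuming a Stouffer-style combination of study z-scores and a one-tailed alpha of .05; the z-scores in the example are hypothetical.

      # Fail-safe N sketch (Python): unpublished null studies needed to drop the
      # Stouffer-combined z of the published studies below z_alpha = 1.645.
      import math

      def fail_safe_n(z_scores, z_alpha=1.645):
          k = len(z_scores)
          z_sum = sum(z_scores)
          return max(0, math.ceil(z_sum ** 2 / z_alpha ** 2 - k))

      # Five hypothetical published studies with modest positive z-scores:
      print(fail_safe_n([2.1, 1.8, 2.5, 1.7, 2.0]))   # -> 33 file-drawer studies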