Commentary

Many eyes are better

Peer review is the filter that determines what is published in the scientific literature and what is not allowed to see the light of day. It has been considered "the gatekeeper of science." In modern times, peer review has consisted of a journal editor sending a submitted article to a limited number of experts in the field who then judge its worth for publication. Many specialists see reviewing manuscripts as a service to their profession, put considerable effort into their analysis, and provide extensive comments on revisions needed to strengthen papers.

However, the value of this venerable process as it is presently constituted has been questioned by numerous critics. Because manuscripts are reviewed by a limited number of often rival scientists in a highly specialized field, the practice is prone to bias. Innovation may be stifled when reviewers reject outlier concepts that may be correct but do not fit into the mainstream of thought. Critiques are often superficial, as they are performed by otherwise busy individuals who receive no compensation for their efforts. Finally, journal editors are given undue power in the process: not only do they make the final decision to publish or reject a manuscript, but they also select the reviewers and are then free to accept or ignore their recommendations.

These criticisms of this time-honored system have created a strong impetus for change. Coincident with the motivation to modify this essential component of the publication process have been technological advances that are facilitating new approaches. The Internet has made peer review a more open, inclusive process in which any member of the scientific community who wishes to do so can contribute to the evaluation of published work. A recent incident highlights how uninvited, but valuable, input from the wider scientific community can rapidly and effectively improve the accuracy of the literature.

On July 2, 2014, National Public Radio’s Morning Edition broadcast "Easy method for making stem cells was too good to be true," and a New York Times headline proclaimed "Stem cell research papers are retracted." In January 2014, Haruko Obokata of the RIKEN Centre for Developmental Biology in Japan had published what appeared to be an innovative and considerably simpler method of producing stem cells than extracting them from embryos or deriving them from skin cells through a complicated and prolonged process (Nature 2014;505:641-7). At the time of publication, the work was viewed by many as potentially Nobel Prize-worthy research. That assessment held only until the stem cell research community chimed in with its extensive and detailed post-publication peer review.

A number of research groups questioned Obokata’s conclusions. Some even attempted to replicate the experiments, but with no success. Soon the critics’ findings and opinions appeared on a variety of websites and blogs, including the Nature website. The RIKEN Centre took notice and appointed a committee to investigate the research. The committee found that Obokata had manipulated her data on at least two occasions and concluded that she had participated in research misconduct. Pressure mounted, which led to the recent voluntary retraction of the article by its authors.

This case represents an extreme outcome: a failure of pre-publication vetting followed by a successful post hoc peer review. But it demonstrates how this emerging, more comprehensive means of evaluating published research is rapidly working its way into the fabric of how science, and the reporting of it, operates. Ideally, such extensive vetting of potentially important research would take place before, rather than after, its release to the general public. Physicists have accomplished this by posting their research papers as preprints online for their colleagues to evaluate; only after this process has run its course is a work deemed ready for entry into the physics literature. Subjecting biomedical research to a similar process has a significant downside: new, possibly harmful therapies that have not yet been peer reviewed could be adopted by practitioners or patients before their time. However, some modified form of this approach will likely evolve and lead to a more accurate assessment of submitted work than the present process allows.

How publications are valued is also being modified, thanks to the omnibus means of rapid communication allowed by an ever-expanding Internet. Bibliometrics, most notably the number of times an article is subsequently cited in print, have been the mainstay in determining the value of individual articles. The journal impact factor, historically the main measure of a journal's standing relative to others in its field, is derived from the aggregate of citations to all of its articles over a defined period, typically the preceding 2 years. With the advent of the Internet, a new set of alternative metrics (altmetrics) is now contributing to the evaluation of published work. While print citations take years to accumulate, article downloads, mentions on Facebook, numbers of tweets on Twitter, and numerous other altmetrics have the considerable advantage of immediacy and can be registered by any reader, not only by those authors who go on to cite a publication. Although these new metrics are unlikely to replace traditional citations in assigning value to individual articles, used alongside them they will help readers determine what must be read to maintain currency in one's specialty.
